[00:00] <mxc> the mms charm will be at github.com/docmunch/mongo-monitoring-service-charm
[00:00] <mxc> (mms charm seems potentially ambiguous)
[00:00] <lazypower> Depends on how you present the information to the user. Maybe open a MR on the mongodb charm once the subordinate is approved by charm review to mention that it exists.
[00:01] <lazypower> unless you mean just using the acronym, in which case, indeed. It could be installing soggy waffles for all i know.
[00:01] <mxc> yea, i meant MMS may not be super clear
[00:02] <lazypower> Agreed. Less of the jargon, more of the substance.
[00:02] <lazypower> ooo you're a haskell guy huh?
[00:02] <lazypower> *person
[00:03] <mxc> yup
[00:03] <mxc> docmunch is haskell in the back, typescript + angular in the front
[00:03] <lazypower> I'll have to pick your brain someday
[00:03] <mxc> are you a haskeller?
[00:03] <lazypower> I'm a polyglot. I sit in many different camps :)
[00:04] <mxc> very few haskellers write nothing but haskell
[00:04] <mxc> we call them "academics"
[00:04] <lazypower> I guess more to the point, not yet - but will be soon if my career continues on its current trajectory
[00:04] <mxc> what do you mean?
[00:05] <lazypower> I'm writing ruby and C# mostly for my day job, and we are starting to get into larger data sets. Ruby is losing performance and I really don't enjoy the MS toolchain for licensing reasons. So I'll be looking for something more suited to what i'm doing. Haskell and Go come to mind.
[00:09] <lazypower> It's that or find someone that knows R - but this is all offtopic.
[00:10] <mxc> brb
[01:02] <marcoceppi> lazypower: mxc Jenkins and Jenkins-lxc
[01:03] <lazypower> For charm testing?
[01:03] <marcoceppi> jenkins-lxc adds LXC container support to Jenkins, so it's complementary
[01:03] <lazypower> Ah! good call.
[01:07] <lazypower> I have a question about best practices on charming - this particular application depends on MongoDB. Is it a good idea to have the charm install mongodb by default and consume the local resource, but also support using an external mongodb server through a relation, which would strip out the local mongodb server?
[01:07] <lazypower> or should I pick a road and stick with it?
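The "local by default, external via relation" pattern lazypower is asking about is usually expressed by declaring the external database as a relation in the charm's metadata and falling back to a locally installed MongoDB when no relation is joined. A minimal sketch; the charm name and relation name here are hypothetical:

```yaml
# Hypothetical metadata.yaml sketch: the install hook sets up a local
# MongoDB by default; joining the `database` relation switches the app
# to the external server (the relation hook strips the local one).
name: my-app
summary: Example application that depends on MongoDB
requires:
  database:
    interface: mongodb
```

The database-relation-joined/-changed hooks would then rewrite the application's connection settings and remove the local mongodb package.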
[01:08] <mxc> marcoceppi: thanks
[01:09] <mxc> marcoceppi: while I've got your attention, did you see my security group question from about 3 hrs ago?
[01:13] <marcoceppi> mxc: I did, but I was just leaving
[01:14] <marcoceppi> mxc: that answer won't work for juju > 1.0
[01:14] <mxc> sorry to bother you then.  thanks for that heads up
[01:14] <marcoceppi> mxc: no, I was leaving when you asked 3 hours ago
[01:14] <marcoceppi> I'm back now
[01:14] <marcoceppi> mxc: At this time there's no way to close sec group rules without doing so manually
[01:14] <mxc> hm
[01:15] <marcoceppi> You don't need to expose anything for units to talk to each other in-network, but by default we open ssh on port 22, and those other two ports are for the Juju API
[01:15] <mxc> if i close juju-amazon, will juju try to reopen it?
[01:15] <mxc> marcoceppi: my general plan was to run the juju commands from a machine that requires 2FA to ssh into but is in the same security group as the juju worker instances
[01:16] <marcoceppi> mxc: it shouldn't, afaik it won't actively monitor rules, it simply opens or closes them on your command
[01:16] <sarnold> since security group features vary so much from provider to provider it might be worth looking to providing your own host-based firewalling via a subordinate charm. yet another item on my too-long todo list :)
[01:16] <mxc> sarnold: i've been thinking of writing a UFW charm too
[01:16] <marcoceppi> sarnold: there was some discussion about moving to a machine based firewaller, not sure how far that's gone
[01:16] <sarnold> mxc: that'd be my first attempt at it, too, hehe :)
[01:17] <sarnold> marcoceppi: I don't know about _moving_ to it, it feels to me like security group and host-based are orthogonal or, at least, nicely belts-and-suspenders
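The UFW subordinate idea floated above would attach to each principal unit through a container-scoped relation. A hypothetical metadata.yaml fragment for such a charm (names are illustrative, not from an existing charm):

```yaml
# Hypothetical subordinate metadata.yaml sketch for a UFW charm.
# `subordinate: true` plus a container-scoped relation makes the charm
# deploy alongside whichever principal service it is related to.
name: ufw
summary: Host-based firewalling via ufw, deployed as a subordinate
subordinate: true
requires:
  host:
    interface: juju-info
    scope: container
```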
[01:26] <mxc> lazypower: so I'm not sure that a subordinate service makes sense for MMS
[01:26] <mxc> since MMS can be on separate machines and a single MMS agent can monitor multiple mongo instances
[01:27] <lazypower> But does it make sense to by default deploy the monitoring agent to its own machine?
[01:27] <mxc> not necessarily
[01:28] <mxc> if the mongo host dies, it may be better to have MMS on a separate machine to report said untimely death
[01:28] <lazypower> once the MMS agent connection is severed it instantly goes into panic mode and dispatches notices via your configured channels.
[01:29] <lazypower> But you bring up an interesting point. If you're running a cluster, and the host housing MMS dies, then you've effectively lost monitoring on the other machines that are still active.
[01:42] <marcoceppi> lazypower: can you run mms on multiple mongo machines?
[01:42] <marcoceppi> rather, multiple instances of mms?
[01:45] <lazypower> You can. there's no limit to how many agents you deploy
[01:59] <mxc> but, MMS is a bit odd in that you can only set the hosts that it monitors in the web console
[01:59] <mxc> so, my plan is to instruct users to add a host with some internal name, like juju-mongo.internal
[01:59] <mxc> and then add it to the config
[02:00] <mxc> and then have the database-relation-changed hook set the IP address in the /etc/hosts file
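The /etc/hosts hack described above could look something like this inside a database-relation-changed hook. This is only a sketch with hypothetical names; in a real hook the IP would come from `relation-get private-address` rather than a function argument.

```shell
#!/bin/sh
# Sketch of the /etc/hosts hack described above (hypothetical names).
# HOSTS_FILE is parameterised so the function can be exercised against
# a scratch file; a real hook would operate on /etc/hosts directly.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"

update_hosts_entry() {
    # $1 = internal name (e.g. juju-mongo.internal), $2 = IP address.
    name="$1"; ip="$2"
    # Drop any stale entry for this name, then append the fresh one.
    grep -v " $name\$" "$HOSTS_FILE" > "$HOSTS_FILE.tmp" || true
    echo "$ip $name" >> "$HOSTS_FILE.tmp"
    mv "$HOSTS_FILE.tmp" "$HOSTS_FILE"
}
```

In the hook itself this would be called as `update_hosts_entry juju-mongo.internal "$(relation-get private-address)"`, so MMS keeps monitoring the fixed internal name while the relation supplies the current IP.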
[02:04] <mxc> lazypower: marcoceppi: if you're interested, here's the repo: https://github.com/docmunch/mongodb-management-service-charm
[02:04] <mxc> i should get it done tomorrow
[02:05] <marcoceppi> mxc: you can have the charm do that, if it's data you can post to a webform
[02:06] <mxc> it's complicated though, you'd have to log into the site, post the data, etc
[02:06] <mxc> i have a hard jan 1 deadline of getting everything deployed, so for now, i'm going to stick with my /etc/hosts hack
[02:07] <marcoceppi> mxc: that works, always room to iterate
[02:08] <mxc> yup
[07:47] <sekjun9878> Hello
[07:48] <sekjun9878> Does anyone know why this is happening, and how to fix?: "ERROR juju.state open.go:93 TLS handshake failed: x509: certificate signed by unknown authority"
[07:53] <sekjun9878> I have tried deleting the environments folder, to no avail.
[12:44] <ashipika> hi.. is it possible for different services to have relations as peers?
[14:04] <benji> rick_h__: do you know where the prod .ini file for charmworld is stored?
[14:08] <benji> hey bac
[14:08] <benji> pfft
[14:12] <andre118> Hi, I was here yesterday but meanwhile my internet connection broke...
[14:12] <andre118> I cannot deploy using juju
[14:12] <andre118> I can bootstrap a node on a MaaS infrastructure but it doesn't deploy any tools on it
[14:13] <andre118> can anybody help me out please?
[14:14] <rick_h__> benji: otp
[14:22] <andre118> Juju does not show anything wrong using --debug flag while bootstrapping
[14:23] <andre118> it downloads and uploads (to the bootstrap instance i guess) several tools but doesn't seem to install anything
[14:27] <andre118> if someone had the generosity to help me I would be very thankful :)
[14:51] <bloodearnest> any examples in current charms of cronjob setup? In particular the script run by cron would need access to some of the charm's config
[14:52] <bloodearnest> I guess config-changed could update the crontab to pass some params to the script, but this is tricky in my case as some of the params can be arbitrarily large
[14:53] <bloodearnest> So, maybe the config-changed needs to update both the crontab and the script
[14:55] <bloodearnest> Or can I access the charm's config from the cron-executed script? I don't think I can...
[15:07] <rick_h__> benji: did you find it? I think it's just in the charm
[15:07] <rick_h__> benji: well, it's generated from the charm.
[15:07] <rick_h__> using the overrides
[15:22] <bloodearnest> lemme rephrase my question: can I use config-get outside of a hook?
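Hook tools like config-get only work inside a hook's execution context, so the usual answer is the pattern bloodearnest describes above: config-changed renders the values out to files the cron-run script can read, keeping arbitrarily large values off the crontab line. A sketch with hypothetical paths and keys; in a real hook the two values would come from `config-get` calls.

```shell
#!/bin/sh
# Sketch of the config-changed approach (hypothetical paths and keys).
# RENDER_DIR is parameterised for testing; a real hook would render to
# somewhere like /etc/mycharm and install the crontab via /etc/cron.d.
RENDER_DIR="${RENDER_DIR:-/etc/mycharm}"

render_cron_config() {
    # $1 = run interval in minutes, $2 = arbitrarily large config value.
    # In the real hook: interval=$(config-get interval), and so on.
    interval="$1"; payload="$2"
    mkdir -p "$RENDER_DIR"
    # Large values live in an env file, not on the crontab line itself.
    printf 'PAYLOAD=%s\n' "$payload" > "$RENDER_DIR/cron.env"
    printf '*/%s * * * * root . %s/cron.env && /usr/local/bin/myjob\n' \
        "$interval" "$RENDER_DIR" > "$RENDER_DIR/cron.tab"
}
```

The cron-executed script then just sources cron.env, so it never needs access to hook tools at all.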
[15:24] <andre118> can someone help me with juju deployment?
[15:49] <andre118> juju isn't installing any tools on the bootstrap node
[15:50] <andre118> although the MaaS controller allocates the node
[15:50] <andre118> what could be wrong?
[15:59] <marcoceppi> mgz: andre118 isn't getting userdata from juju to cloud-init in maas, where should we debug?
[16:11] <andre118> after sync-tools, this is the output I get running juju bootstrap --show-log --debug: http://paste.ubuntu.com/6567396/
[16:11] <andre118> nothing wrong on it
[16:12] <andre118> is it possible that juju is telling MaaS to bring up a machine, but cannot access it afterwards?
[16:13] <andre118> maybe a misconfiguration with the ssh keys?
[16:13] <marcoceppi> andre118: juju talks to instances via userdata that cloud-init consumes. That userdata has things like the SSH key, what packages to install, etc
[16:14] <marcoceppi> andre118: for some reason that isn't getting to the instances, I've pinged some of the developers to see if they can help you debug further
[16:19] <jcastro> marcoceppi, for fixing charm proof problems, like missing default values
[16:19] <jcastro> as a general rule, do I just set defaults as blank?
[16:19] <jcastro> for example in mediawiki it wants a default logo and default admin list
[16:19] <marcoceppi> jcastro: that's a good general rule, but it still needs to be tested
[16:19] <andre118> that's strange, as the cloud-init log doesn't report anything wrong either, nor anything related to juju.
[16:19] <andre118> ok, thank you
[16:20] <marcoceppi> andre118: but it doesn't show that it's running the user data, the log should be a lot longer and very verbose
[16:20] <jcastro> marcoceppi, ok so I can test by ... deploying and trying to set those configs?
[16:20] <marcoceppi> jcastro: just deploy and make sure it doesn't error out and actually works as it did before you made the change
[16:20]  * jcastro nods
[16:21] <jcastro> my goal is to get all of them charm proofed and at least have bugs filed before you get to them
[16:21] <marcoceppi> jcastro: <3
[16:28] <jcastro> marcoceppi, what am I looking for in the deploy --debug to ensure the charm is being deployed from my disk and not the store?
[16:28] <jcastro> or does the local:mediawiki take care of that for me?
[16:43] <marcoceppi> local: takes care of that
[16:44] <dpb1> marcoceppi: did you get anywhere on that amulet hotfix?
[16:45] <marcoceppi> dpb1: yes, I'm preparing the 1.1 release atm
[16:45]  * dpb1 nods
[16:55] <marcoceppi> jcastro: did you change the maintainer for Ubuntu charm?
[16:57] <jcastro> for which one?
[16:57] <jcastro> marcoceppi, my local provider isn't destroying containers
[16:58] <jcastro> it's leaving them behind again
[16:58] <jcastro> I thought we fixed this?
[16:58] <marcoceppi> jcastro: yeah, there's a bug, webbrandon ran it on this yesterday
[16:59] <jcastro> ugh
[16:59] <jcastro> seriously
[16:59] <jcastro> this totally breaks local testing for me
[17:00] <marcoceppi> jcastro: hum, he said he was going to open a bug, I don't see one though
[17:01] <jcastro> I saw sinzui reopened the original bug
[17:01] <marcoceppi> oatman had this problem too, but he was able to just delete the /var/lib/lxc/* stuff and it chugged along
[17:01] <marcoceppi> it might be a race condition
[17:03] <lazypower> In several ways I'm sorry to see this, but happy to know it's not something I did. My local containers were affected by this bug too when they were left in an error state and resolved.
[17:04] <lazypower> the containers refused to be destroyed unless i wiped the entire environment.
[17:05] <noodles775> marcoceppi: Hi. I'm trying to deploy a charm with amulet, but get "No such file or directory: 'precise/relation-sentry/metadata.yaml'". Any tips for a work-around? (that's on trunk)
[17:05] <noodles775> http://paste.ubuntu.com/6567514/
[17:07] <marcoceppi> noodles775: if you branched from lp, LP is no longer the source of trunk
[17:07] <marcoceppi> noodles775: branch from github, https://github.com/marcoceppi/amulet
[17:08] <noodles775> k, yes I did branch from LP. I'll try from github. Thanks.
[17:09] <marcoceppi> noodles775: thanks, I'll move the project under juju/amulet in due time, but marcoceppi will continue to mirror the source
[17:12] <noodles775> marcoceppi: ok, that gives me a different error :P. I'll check back next week. http://paste.ubuntu.com/6567698/
[17:12] <noodles775> hrm, hang on, let me force python3
[17:13] <marcoceppi> noodles775: yeah, python3 only :)
[17:15] <noodles775> marcoceppi: so that gets me back to my original error: http://paste.ubuntu.com/6567724/ :/
[17:16] <marcoceppi> noodles775: I can't seem to replicate that
[17:17] <marcoceppi> noodles775: I noticed that it's installed in python2.7
[17:17] <noodles775> marcoceppi: what's installed in python2.7? juju-deployer?
[17:18]  * noodles775 checks for a python3-juju-deployer
[17:18] <marcoceppi> noodles775: oh, sorry
[17:18] <marcoceppi> noodles775: you're right
[17:18] <marcoceppi> noodles775: we're shelling out to deployer atm
[17:19] <marcoceppi> noodles775: do you have latest juju-deployer from ppa:juju/stable?
[17:24] <noodles775> marcoceppi: the ppa:juju/stable version of juju-deployer isn't newer than the saucy archive one is it? http://paste.ubuntu.com/6567774/
[17:27] <noodles775> Actually, there is no saucy version of juju-deployer in the ppa. The ~ubuntu13.04.1~juju1 version is apparently Raring.
[17:27] <noodles775> https://launchpad.net/~juju/+archive/stable/+packages?field.name_filter=deployer&field.status_filter=published&field.series_filter=
[17:27] <noodles775> er, right. There's no 13.10 one there :) (13.04 is raring)
[17:28] <marcoceppi> noodles775: i think deployer is sync'd directly to saucy
[17:28] <marcoceppi> so you should have latest
[17:28] <noodles775> Yep, as per the paste above 0.2.5-0ubuntu1
[17:29] <marcoceppi> noodles775: can you pastebin /tmp/amulet-juju-deployer-7pyrwu.json
[17:53] <jcastro> hey marcoceppi
[17:53] <marcoceppi> hey jcastro
[17:53] <jcastro> the mediawiki is missing a start hook
[17:53] <jcastro> is this something that I can just make a blank file or is there something it should be doing for start?
[17:53] <jcastro> "I've worked so long without a start hook!"
[17:54] <marcoceppi> jcastro: yeah, because apache is started during install, and restarted every other time. Is it a W or an I?
[17:55] <marcoceppi> technically all hooks are optional, so they should all be INFO not WARN
[17:56] <jcastro> it's a W
[17:56] <jcastro> should I file a tools bug?
[17:57] <marcoceppi> jcastro: yes, it's part of a bigger bug though
[17:58] <marcoceppi> jcastro: https://bugs.launchpad.net/charm-tools/+bug/1172458
[17:58] <_mup_> Bug #1172458: proof does not distinguish between poorly-written and unusable charms <charmbrowser> <Juju Charm Tools:Triaged by marcoceppi> <https://launchpad.net/bugs/1172458>
[17:58] <marcoceppi> can you just update that bug with this information
[17:58] <marcoceppi> jcastro: I'm going to get that fixed in 1.2.5
[17:58] <jcastro> ack
[18:07] <fcorrea> hey there. Anyone here using the postgresql charm with an attached volume? A nova volume in this case
[18:09] <fcorrea> As far as I could trace it, the charm fails to migrate data from the local storage to the attached nova volume storage because it can't stop the postgres service
[18:20] <marcoceppi> fcorrea: do you have logs from /var/log/juju/unit-postgresql-*.log ?
[18:37] <noodles775> marcoceppi: sure - http://paste.ubuntu.com/6568142/
[18:38] <marcoceppi> noodles775: there should be a /tmp/amulet* directory (actually probably a few, since they don't get cleaned up
[18:38] <marcoceppi> ...yet
[18:41] <noodles775> marcoceppi: plenty of them, yes, but it's the /tmp/sentry_* you'd want to see isn't it? It doesn't have the metadata.yaml, as per the error: http://paste.ubuntu.com/6568157/
[18:41] <noodles775> Or is there something else that might be useful?
[18:41] <marcoceppi> noodles775: well, that explains that, but I can't duplicate that on this end :\
[18:42] <marcoceppi> noodles775: something else might be silently failing
[18:42] <noodles775> marcoceppi: OK, it may be worth trying on a new instance. Otherwise, I'll take a closer look on Monday and send you a pull request :)
[18:42] <marcoceppi> noodles775: I'm pushing up a few more things for 1.1 and building an apt package for it tonight
[18:43] <marcoceppi> noodles775: I'd recommend running the packaged version first instead of trunk.
[18:43] <noodles775> marcoceppi: heh, well I only switched to trunk because the packaged version didn't work. But I'll try the new one (or if it'd help, I can install the current one again and show the error).
[18:44] <marcoceppi> noodles775: ah! well I want to make sure packaged works for sure. Please ping me if you run into anything
[18:44] <fcorrea> marcoceppi, I might have as soon as I finish re-deploying my stack :)
[18:50] <noodles775> marcoceppi: will do. fwiw, here's the (same) error using the current package http://paste.ubuntu.com/6568199/
[18:50] <marcoceppi> noodles775: thanks, I'll spin up a VM, my workstation is pretty dirty
[18:52] <marcoceppi> noodles775: I'm running into other problems now anyways :\
[18:57] <xnox> i am having trouble with juju-gui =/
[18:58] <marcoceppi> xnox: what trouble exactly?
[18:59] <xnox> i'm trying to add: cs:~cjwatson/quantal/wanna-build and cs:~cjwatson/quantal/sbuild/cross charms to canvas.
[18:59] <xnox> but i fail to get two boxes up; instead both of them are wanna-build charms, and not wanna-build & sbuild
[19:00] <fcorrea> marcoceppi, from the very beginning up to the failure: http://paste.ubuntu.com/6568257/
[19:01] <marcoceppi> fcorrea: does the volume appear mounted?
[19:01] <marcoceppi> xnox: are you deploying those with the gui?
[19:01] <fcorrea> marcoceppi, yup. It even has some postgresql structure in there
[19:02] <xnox> marcoceppi: well, it needs postgres as well, so yeah. I was going to make a bundle and then launch it on private openstack.
[19:02] <fcorrea> marcoceppi, http://paste.ubuntu.com/6568278/
[19:02] <marcoceppi> xnox: are you using the jujucharms.com sandbox?
[19:02] <xnox> marcoceppi: but without being able to add three charms, join them, and package as a bundle i'm not sure how well this will play out.
[19:03] <xnox> marcoceppi: yes, i'm on that web-site.
[19:03] <xnox> in the build tab.
[19:03] <marcoceppi> xnox: have you tried clearing cache and reloading the page? If so I'll grab someone from the GUI team to help get debug information from you
[19:04] <xnox> marcoceppi: actually i think something weird with the charm.
[19:04] <marcoceppi> xnox: oh?
[19:04] <xnox> marcoceppi: when i select sbuild charm it redirects into a wanna-build charm.
[19:05] <marcoceppi> xnox: I don't see cs:~cjwatson/quantal/sbuild in the gui
[19:05] <xnox> marcoceppi: let me fork the charms and check what's going on.
[19:05] <marcoceppi> fcorrea: unfortunately the log doesn't say where exactly it failed, let me look at the hooks
[19:05] <xnox> marcoceppi: yeah, why isn't it?
[19:06] <marcoceppi> xnox: no idea, is the branch name lp:~cjwatson/charms/quantal/sbuild/cross or lp:~cjwatson/charms/quantal/sbuild/trunk ?
[19:06] <fcorrea> marcoceppi, exactly. While debugging the config-changed hook, I could see that it fails while trying to stop the server so the data can be migrated
[19:07] <marcoceppi> fcorrea: but the output of that log shows it stopped successfully
[19:07] <fcorrea> marcoceppi, for some reason, I can't stop the server with "sudo service postgresql stop" for example
[19:07] <marcoceppi> which is odd
[19:07] <xnox> marcoceppi: /trunk does not exist, /cross is the right one.
[19:07] <marcoceppi> xnox: I think charmworld only imports charms that end in /trunk let me find someone from that team to confirm
[19:08] <xnox> marcoceppi: also there doesn't seem to be saucy series for charms =(
[19:08] <fcorrea> marcoceppi, right, which I guess is misleading since it was still up. And now the only way to stop the service is by killing it
[19:08] <fcorrea> marcoceppi, lemme check the upstart logs
[19:08] <marcoceppi> fcorrea: cool
[19:09] <marcoceppi> xnox: you should be able to push to lp:~user/charms/saucy/charm/trunk
[19:09] <marcoceppi> I just don't think we have any saucy charms
[19:09] <marcoceppi> xnox: we tend to recommend people write charms for LTS
[19:09] <marcoceppi> so the majority are in precise
[19:09] <xnox> marcoceppi: bzr push lp:~xnox/charms/saucy/sbuild/trunk
[19:09] <xnox> bzr: ERROR: Permission denied: "~xnox/charms/saucy/sbuild/trunk/": : No such distribution series: 'saucy'.
[19:10] <xnox> And here: https://launchpad.net/charms/+series
[19:10] <xnox> there is no saucy series.
[19:10] <marcoceppi> xnox: let me add that series
[19:11] <fcorrea> marcoceppi, nothing in the logs. I'm wondering if the upstart job is getting confused by some data that changed while moving the data out to the nova volume, and it fails when it tries to kill it
[19:11] <marcoceppi> jcsackett: does charmworld import personal branches that don't end in /trunk? IE: lp:~cjwatson/charms/quantal/sbuild/cross ?
[19:11] <xnox> marcoceppi: ok, make sure you don't break ubuntu series though ;-)
[19:13] <fcorrea> marcoceppi, it's easy to replicate though. Just deploy postgres, attach the nova volume, juju set postgresql volume-map="{postgresql/0: vol-000001}", juju set postgresql volume-ephemeral-storage=False, juju resolved --retry postgresql/0
[19:13] <fcorrea> marcoceppi, after that you should end up with a die hard postgresql server
[19:13] <fcorrea> impossible to stop it
[19:13] <marcoceppi> why are you running resolved --retry ?
[19:14] <fcorrea> marcoceppi, the unit goes into an error state if I change the volume-map and the volume-ephemeral-storage is still True
[19:15] <marcoceppi> fcorrea: ah, well I can help you there
[19:15] <fcorrea> marcoceppi, then when I set it to False, I retry it but then it goes into error state on config-changed because of the problems described above
[19:15] <marcoceppi> juju set postgresql volume-map="{postgresql/0: vol-000001}" volume-ephemeral-storage=False
[19:16] <marcoceppi> fcorrea: you can send multiple key/val pairs at once
[19:16] <fcorrea> marcoceppi, ahh
[19:16] <marcoceppi> maybe that's why it's in this weird locked state?
[19:16] <fcorrea> well, I'll find out in a sec
[19:16] <marcoceppi> fcorrea: cool, if so, file a bug against the charm, there should be some safe guards in place to prevent this
[19:17] <jose> marcoceppi: hey, I'm trying to work on the postfix charm now
[19:17] <fcorrea> marcoceppi, will do
[19:18] <marcoceppi> jose: awesome
[19:19] <marcoceppi> fcorrea: if it still doesn't work, stub, the author, might be able to help
[19:19] <fcorrea> marcoceppi, awesome. will ping him for sure if I still have issues with it
[19:20] <fcorrea> marcoceppi, thanks btw :)
[19:21] <xnox> marcoceppi: after pushing a new branch, how long does it take to appear in jujucharms / manage.jujucharms.com ? i pushed lp:~xnox/charms/trusty/wanna-build/trunk lp:~xnox/charms/trusty/sbuild/trunk but http://manage.jujucharms.com/~xnox is empty
[19:22] <marcoceppi> xnox: every 15 min on the nose
[19:22] <marcoceppi> xnox: so in about 8 mins
[19:28] <jose> marcoceppi: hey, in my charm, what are cacert.pem and cakey.pem?
[19:29] <marcoceppi> jose: you tell the user to generate them as part of the instructions, they're the SSL certs
[19:29] <jose> can they be transmitted as base64 strings too?
[19:31] <xnox> marcoceppi: are trusty charms imported at all?
[19:31] <marcoceppi> jose: yeah
[19:31] <marcoceppi> xnox: I hope so.. jcsackett^?
[19:32]  * xnox is failing to find any trusty charms
[19:32] <jose> xnox: hey, note that your name as displayed on your IRC info is still the old one :)
[19:32] <jose> (real name string)
[19:32] <xnox> jose: ha, should fix it. thanks.
[19:32] <sarnold> :)
[19:35] <xnox> jose: sarnold: not sure, i think my znc proxy must reconnect to update it. but i don't want to lose uptime =)
[19:35] <xnox> meh, will restart it on a sunday or something.
[19:36] <jose> :P
[19:36] <jose> marcoceppi: hey, do you know if there's a way to see if a script has been run at least once?
[19:37] <xnox> marcoceppi: looks like they are on manage.jujucharms.com now, but not in jujucharms.com
[19:37] <xnox> marcoceppi: clicking on any of the two on http://manage.jujucharms.com/~xnox ends up with 404
[19:40] <jcastro> marcoceppi, mediawiki done
[19:44] <jose> marcoceppi: fix pushed
[19:46] <marcoceppi> hey benji, so is trusty not in the GUI as a series yet?
[19:47] <marcoceppi> GUI/manage.jujucharms.com
[19:47] <fcorrea> marcoceppi, sending multiple key/values worked....like a charm. Thanks
[19:47] <benji> marcoceppi: that may be the issue
[19:48] <fcorrea> marcoceppi, it would be nice if those two config parameters only worked together, so that when a user passes a single value it no-ops. But anyways
[19:48] <marcoceppi> fcorrea: thanks, can you open a bug against postgresql that it should error out if one is set and not the other?
[19:48] <fcorrea> marcoceppi, heh, indeed
[19:48] <marcoceppi> fcorrea: https://bugs.launchpad.net/charms/+source/postgresql
[19:49] <marcoceppi> benji: when you're looking, could you make sure both trusty and saucy are valid GUI/manage.jujucharms.com series?
[19:52] <benji> marcoceppi: from a cursory investigation, it doesn't look like the GUI has any hard-coded series, so that shouldn't be the source of the issue
[19:53] <marcoceppi> benji: so the API has the trusty charm, https://manage.jujucharms.com/api/3/charm/~xnox/trusty/sbuild
[19:54] <marcoceppi> xnox: oh, it's showing now. Maybe it was just a cached response: https://manage.jujucharms.com/~xnox/trusty/sbuild
[19:54] <marcoceppi> xnox: https://jujucharms.com/fullscreen/search/~xnox/trusty/sbuild-0/?text=sbuild
[19:54] <marcoceppi> benji: this might be a moot point then
[19:55] <xnox> excellent
[19:55] <xnox> marcoceppi: not sure why it's polling instead of doing push updates, it's not like charms are updated at hundreds per second.
[19:56] <marcoceppi> rick_h__: while you're here, can you confirm that charmworld won't ingest charms that don't end in /trunk ? IE: lp:~cjwatson/charms/quantal/sbuild/cross
[19:56] <rick_h__> marcoceppi: I believe that's the case
[19:56] <marcoceppi> rick_h__: thanks
[19:56] <marcoceppi> xnox: LaunchPad doesn't have push notifications
[19:56] <xnox> marcoceppi: oh, right.
[19:57] <xnox> marcoceppi: well there are branch rss feeds, but no notifications of new branches indeed.
[19:59] <xnox> marcoceppi: is there a way to "save" config changes without deploying?
[19:59] <xnox> or is deploy, actually "save changes"
[19:59]  * xnox tries.
[19:59] <marcoceppi> xnox: in the GUI? yeah, saving will deploy and save
[19:59] <marcoceppi> it's all async'y
[20:00] <xnox> deploy where?
[20:00]  * xnox is using website without any providers.....
[20:01] <marcoceppi> xnox: to the GUI sandbox
[20:01] <marcoceppi> it creates a mock provider to allow you to stage stuff as if it were an environment
[20:02] <xnox> ah, i see.
[20:04] <rick_h__> marcoceppi: xnox it's there, just took a few min for ingestion to get it http://comingsoon.jujucharms.com/sidebar/search/?text=sbuild
[20:05] <marcoceppi> rick_h__: ta!
[20:05] <rick_h__> we just got impatient on it, didn't realize it was so new
[20:06] <xnox> marcoceppi: i think i managed to add postgres, wanna-build and sbuild and connect them together.
[20:06] <marcoceppi> xnox: \o/ see, wasn't that hard ;)
[20:06] <xnox> marcoceppi: when connecting wanna-build with postgresql it did ask me if I want "db - db-admin" or "db-admin - db-admin" relationship.
[20:07] <xnox> marcoceppi: now, i'm not sure what that means. i've picked the latter one.
[20:07] <xnox> now i guess all I need to do is launch that bundle into a provider \o/
[20:07] <marcoceppi> xnox: that means there are multiple relations that are supported between those two charms
[20:07] <marcoceppi> typically the readme will illuminate which you may want
[20:09] <xnox> well, wanna-build requires "db-admin: interface: pgsql" so i've picked db-admin/db-admin
[20:09] <xnox> ah, both db & db-admin use the same interface - pgsql
[20:10] <xnox> right, makes sense that both are offered.
[20:10] <marcoceppi> xnox: yeah, you made the right choice
[20:10] <xnox> marcoceppi: bundle exported =) can I commit it into a branch or something?
[20:13] <marcoceppi> xnox: yes! https://juju.ubuntu.com/docs/charms-bundles.html#creating
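For the deployment xnox assembled above, the exported bundle would look roughly like this in the bundles.yaml format the deployer consumes. The bundle name, charm URLs, and unit counts here are hypothetical; only the three services and the db-admin relation come from the conversation:

```yaml
# Hypothetical bundles.yaml sketch for wanna-build + sbuild + postgresql,
# using the db-admin relation chosen above.
wanna-build-stack:
  series: trusty
  services:
    postgresql:
      charm: postgresql
      num_units: 1
    wanna-build:
      charm: cs:~xnox/trusty/wanna-build
      num_units: 1
    sbuild:
      charm: cs:~xnox/trusty/sbuild
      num_units: 1
  relations:
    - ["wanna-build:db-admin", "postgresql:db-admin"]
```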
[20:19] <xnox> coool. now back to making another cup of tea to wait for that to propagate =)
[20:58] <jcastro> marcoceppi, can you vote to dupe this please: http://askubuntu.com/questions/368783/did-not-find-expected-hexadecimal-number-when-bootstraping-juju-on-windows
[21:00] <xnox> what's the difference between deployer and quickstart?
[21:00] <jcastro> quickstart is the easy to use frontend that calls deployer
[21:01] <xnox> ... then why are they in two different packages from two different PPAs?
[21:01] <jcastro> because both are still in progress in beta with no real releases yet
[21:02] <marcoceppi> xnox: because they're developed as two separate tools
[21:02] <jcastro> though they should both be in ppa:juju
[21:02] <hatch> quickstart also deploys the gui, opens it and logs in
[21:02] <marcoceppi> xnox: right quickstart is more...user oriented. deployer is this awesome powerhouse tool that a lot of juju/charm tools tap in to
[21:02] <jcastro> marcoceppi, I am out of AU votes, I am flagging all the old abandoned pyju questions, can you make sure they get actioned?
[21:03] <marcoceppi> jcastro: yeah
[21:03] <jcastro> as a user I hope to never have to use deployer, or even see it
[21:03] <xnox> hatch: marcoceppi: jcastro: if jujucharms.com clearly gives me instructions to use it, you have users, so you can't pretend it's "in-development" or "beta" or whatever.
[21:03] <marcoceppi> yeah, it's too featureful for general user consumption imo
[21:03] <xnox> please make them all available from one ppa, from one package.
[21:03] <marcoceppi> xnox: I've filed bugs about this, I wish they would stop referring to it
[21:03] <jcastro> xnox, you are preaching to the choir. :)
[21:03] <xnox> (e.g. juju-bundle-tools which depends on both)
[21:04] <marcoceppi> xnox: well quickstart will be in ppa:juju/stable when it's stable, deployer is already in there (and in distro)
[21:04] <hatch> xnox just because something is public doesn't mean that it's not beta :)
[21:04] <marcoceppi> quickstart package should just depend on juju-deployer
[21:04] <jcastro> xnox, mid/early -januaryish is when quickstart should be in the ppa
[21:04]  * marcoceppi goes to the gui instructions
[21:04] <jcastro> and closer to being finished
[21:05] <marcoceppi> xnox: well, to be honest, it shows both ways
[21:05] <marcoceppi> xnox: it doesn't say you have to install both
[21:05]  * marcoceppi compliments the gui team on cleaning those instructions up
[21:06] <xnox> marcoceppi: but it also fails to say that it is available in my release.
[21:06] <marcoceppi> xnox: ?
[21:07] <xnox> marcoceppi: i don't need any ppa's, i can just $ apt-get install juju-deployer.
[21:07] <xnox> which i found out from this channel =)
[21:07] <marcoceppi> xnox: well, we can't know that you're on saucy
[21:07] <marcoceppi> it's only in saucy+, for everyone else you need ppa
[21:09] <xnox> right, i guess Precise is the better assumption still.
[21:10] <xnox> local provider is charming =)
[21:10] <xnox> http://paste.ubuntu.com/6568920/
[21:11] <xnox> i guess i should be root when deploying? same as for local bootstrap?
[21:12] <xnox> i think the trusty should be queried for "daily" images, not "released"
[21:12]  * xnox ponders where/how to set the cloud image channel.
[21:15] <benji> l8r bac
[21:26] <jcastro> marcoceppi, ok I've flagged almost every single one, if you close those and vote today all our questions should be up to date
[21:26] <jcastro> xnox, you only need root to bootstrap
[21:26] <jcastro> everything else do as non-root