#juju 2012-03-19
<lifeless> SpamapS: hazmat: ^ ?
<hazmat> lifeless, no
<lifeless> thanks
<hazmat> lifeless, the charm/service unit does have a concrete lifecycle though which includes an install phase
<hazmat> and subordinate charms will allow installation into  existing units..
<lifeless> hazmat: concrete use case - I want to configure egress firewall rules both on the node and on the network gateway when the node is brought up as a juju node; that is arguably a cloud API, but no clouds I know of (today) support egress fw rules
<hazmat> lifeless, so two thoughts.. you can manage a firewall within a charm (although as far as local providers with network ns.. i'm not sure about the interaction).. i believe SpamapS had the memcache charm using fw rules to only allow access to related service units.
<lifeless> right, thats the reverse :)
<lifeless> this is egress rules, not ingress
<lifeless> allowing nodes to only start communications that they are meant to be able to.
<hazmat> ah
<hazmat> same principle applies though
<hazmat> in terms of modeling it with a relation, and manipulating the firewall by related units.
<lifeless> right, though for the on-node rules you need zk to supply its own needed ports.
<lifeless> unless you model the zk connection as a relation? Seems a bit black-hole bootstrap to me though.
<hazmat> yeah.. you have to establish a default for the zk node access for juju to run
<hazmat> but you can do that in the charm installation
<hazmat> install hook
<lifeless> hazmat: do we have multiple charms on one node yet ?
<lifeless> or will we have to hack all the charms we use ?
<hazmat> lifeless, not with isolation, but the subordinate work is testable now from a branch
<lifeless> hazmat: well, this is about *nodes* - so wouldn't isolation be a problem ?
<hazmat> true, although i'm curious about the interaction of lxc with firewall manipulation, if it properly only applies to the container network ns.
<lifeless> if you're creating N networks one per container then yes
<lifeless> you get a loopback network, bridged internally to the host + NAT rules for outbound stuff
<lifeless> lxc is leaky in lots of other ways though :)
<hazmat> indeed
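The egress idea discussed above (default-deny outbound on the node, opened from the charm's install hook, while keeping the ZooKeeper connection juju itself needs) can be sketched roughly as follows. This is a hypothetical sketch, not an existing charm: the rules file path, the allowed port, and staging rules in iptables-restore format instead of mutating live state are all assumptions.

```shell
#!/bin/sh
# Hypothetical install-hook sketch: stage egress rules in iptables-restore
# format. Ports and paths are placeholders, not from the discussion above.
set -e
RULES=/tmp/egress.rules   # a real hook would use something under /etc

cat > "$RULES" <<'EOF'
*filter
:OUTPUT DROP [0:0]
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# the unit agent still needs to reach the ZooKeeper node (bootstrap caveat)
-A OUTPUT -p tcp --dport 2181 -j ACCEPT
COMMIT
EOF

# a real install hook would then apply them:
#   iptables-restore < "$RULES"
echo "staged $(grep -c '^-A' "$RULES") egress rules"
```

Per-relation egress (opening ports only toward related units) would then be added by relation hooks rather than hardcoded here.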
<_mup_> txzookeeper/managed-watch-and-ephemeral r48 committed by kapil.foss@gmail.com
<_mup_> expiration handler integrated with retry client
<d0od_> Yay I am a frankend0od
<marcoceppi> d0od: the new OMG charm should be ready today. Which should take care of all the issues we've seen over the past two days.
<d0od> \o/
<d0od> Is the site on Smalls now?
<marcoceppi> no, it's still on larges
<d0od> Ouch!
<d0od> $_$
<marcoceppi> won't be moved to smalls until the new charm is ready
<d0od> Okay
<d0od> Who do I owe money to so far, you or Jorge?
<marcoceppi> Talk to Jorge about that
<d0od> Ok
<niemeyer> Morning everybody
<marcoceppi> d0od: everything still loading "quickly" for you?
<imbrandon> heya jcastro /me returns
<jcastro> imbrandon, meet marcoceppi
<marcoceppi> o/ imbrandon
 * imbrandon waves
<jcastro> imbrandon, ok so we got him back up but in the charm
<jcastro> we want to "do it right"
<jcastro> so marco's pushing up his work
<imbrandon> right on
<jcastro> and we're having some php/caching issues (marco will explain)
<imbrandon> ok so marco is ... ? owner ?
<jcastro> so we'd figured you can have a look, and then do some ninja
<imbrandon> ahh php cache issues are my thing :)
<marcoceppi> imbrandon:  just a charmer
<imbrandon> ahh cool ok :)
<jcastro> no, he's just another everyday hero. :)
<jcastro> imbrandon, the idea is to charm up the most slick way to run OMG
<imbrandon> <-- is just a php bender
<jcastro> and then deploy it
<jcastro> and then point the elastic IP to the new stuff
<imbrandon> jcastro: point the eip to the elb and have the elb to the charm
<imbrandon> right ?
<jcastro> right now the site is running fine, but I made a mistake and deployed on xlarges, so the only real rush is my wallet. :)
<jcastro> we haven't done an elb yet.
<jcastro> in fact, we'll have to do this twice, once on my account to make sure it's sweet.
<marcoceppi> imbrandon: we were using APC + mod_php5 but a bug with the php5 cron and session+fuser was causing some weird child process issues. So I moved them to php-cgi and suPHP since that's what I use on my shared servers and know most intimately. However, suPHP and APC don't play nice. For this charm, suPHP is overkill + 1, fastcgi would be fine but that won't play well with APC either. So I just started
<marcoceppi>  looking in to php-fpm then you arrived. Tag, you're it :)
<imbrandon> downsizing should be ok, we up and down size boxen all day :) i think our "Drops" are at about 350 instances constantly :)
<jcastro> and then redeploy to joey's account when we're ready to hand it over to him
<jcastro> imbrandon, but the idea is to capture all the tweaks in the charm
<imbrandon> jcastro: yup, we call those Drops , play on the drupal name, but they contain all the special sauce and can run on any number of instances
<imbrandon> as one system
<imbrandon> marcoceppi: definitely php-fpm with nginx , actually 2x nginx instances running
<imbrandon> on the same box
<imbrandon> is the way to go
<imbrandon> with APC , and memcached behind that, then finally the DB
<marcoceppi> 2 nginx services running the same box bound to port 80? or two services, same box, different ports, with a proxy in front?
<imbrandon> 2 ports, php-fpm on a unix socket, and then 8080 for nginx that talks to the php, then the nginx on 80 running a microcache AKA varnish killer
<marcoceppi> object caching would probably be overkill for this, it's the wp+plugin generation that's killing us, the db is happy as a clam
<imbrandon> ahhh that's easy, that's what the first one is for
<imbrandon> and yea we can drop the memcache then
<imbrandon> no worries there
<marcoceppi> *nod*
<imbrandon> basically the nginx on 80 will cache things for 800ms, none the wiser BUT
<imbrandon> that means php will only be hit 1 request per second MAX
<imbrandon> no matter how many visits you get
<imbrandon> and no one knows if their content is 800ms old
<imbrandon> nice? nice .... :)
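The two-tier layout imbrandon describes (front nginx on :80 serving a sub-second "microcache", proxying to a second nginx on :8080 that hands PHP to php-fpm over a unix socket) might look roughly like this. A sketch only: file paths, the cache zone name, and the socket path are invented, and it uses a 1s cache validity since that's the common microcaching granularity (the 800ms figure is the idea, not a tested value).

```shell
#!/bin/sh
# Stage a hypothetical nginx site config for the microcache layout above.
set -e
SITE=/tmp/omg-microcache.conf   # a real deploy targets /etc/nginx/sites-available

cat > "$SITE" <<'EOF'
proxy_cache_path /var/cache/nginx/micro levels=1:2 keys_zone=micro:10m;

# front tier: microcache on port 80, "varnish killer"
server {
    listen 80;
    location / {
        proxy_cache micro;
        proxy_cache_valid 200 1s;   # ~the 800ms idea, at 1s granularity
        proxy_pass http://127.0.0.1:8080;
    }
}

# back tier: nginx on 8080 handing PHP to php-fpm on a unix socket
server {
    listen 8080;
    root /var/www/omg;
    index index.php;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}
EOF
echo "wrote $SITE"
```

With this shape, only one cache-miss per second ever reaches php-fpm regardless of traffic, which is the whole argument being made above.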
<marcoceppi> imbrandon: I'm in
<marcoceppi> the current charm is a few bits behind, it's setup for suPHP at the moment
<imbrandon> ok so this is on an 11.10 install ? sorry but we don't use juju so i'm a little blind to that aspect
<imbrandon> no worries i can wrangle it if you want
<imbrandon> setup of a clean nginx + php-fpm 5.3.10 + apc and other extensions is the easy part
<imbrandon> :)
<imbrandon> btw you want igbinary in addition to apc
<imbrandon> for binary serialization of sessions and apc data cache
<marcoceppi> imbrandon: charms are just bash scripts
<imbrandon> but thats an easy "pecl i igbinary"
<marcoceppi> well, these are just bash scripts, charms can be coded in any language
<jcastro> (or in any language you want!)
<imbrandon> marcoceppi: ahh ok, then i'm down, they are probably very similar to our drops
<jcastro> hey can we move to #juju-wordpress for now?
<imbrandon> the drop scripts are in python and spin up down instances and control tons of things
<imbrandon> sure
<jcastro> we can go into more detail there without flooding this channel
<jcastro> rock
<m_3> ec2 is nice and snappy this time of day
<m_3> marcoceppi imbrandon jcastro nice job on omgubuntu!!
<marcoceppi> m_3: <3
<SpamapS> imbrandon: note that pecl is vulnerable to man-in-the-middle attacks
<SpamapS> marcoceppi: that php cron bug is fixed in precise btw
<marcoceppi> SpamapS: whew, good
<SpamapS> Probably worth SRU to 11.10
<marcoceppi> So, I'm trying to retroactively open a port, jumped in to debug hooks, ran open-port 8080/tcp
<marcoceppi> getting an error about juju_agent_socket
<gmb> SpamapS, Just a quick ping to let you know that I've updated https://code.launchpad.net/~gmb/charm-tools/add-charm-helpers/+merge/96204 with tests; python-shell-toolbox is available in ppa:yellow/ppa, too.
<marcoceppi> how would I determine the "client-id" of an instance?
<jcastro> SpamapS, hazmat ^
<hazmat> marcoceppi, you mean the instance id of the machine? or the zk 'client-id' for an agent?
<marcoceppi> not sure, whichever open-port is expecting: No JUJU_CLIENT_ID/--client_id option found
<marcoceppi> I imagine it's zk 'client-id'
<hazmat> ah.. that's the hook cli api..
<hazmat> marcoceppi, so the first window of debug-hooks isn't actually a live environment.. live hook windows will pop in the session as hooks are executed
<marcoceppi> ah, and if all the hooks have been executed already?
<hazmat> marcoceppi, the hook windows have all the necessary env variables set to use the hook cli api.
<hazmat> marcoceppi, its not live then.. there's no remote api context attached to the socket.. the value for the cli client id is 'constant'
<marcoceppi> Maybe I'm attaching to the wrong socket then.
<marcoceppi> rather, maybe I'm asking the wrong question.
<marcoceppi> Can you run commands like open-port without performing a charm upgrade?
<hazmat> marcoceppi, on a live instance outside of a hook, no.
<marcoceppi> damnit
<marcoceppi> ok
<hazmat> marcoceppi,  one off cli script to open a port on service from the client.. http://paste.ubuntu.com/890841/
<marcoceppi> Oh! this runs from the machine
<hazmat> marcoceppi, yeah. its run from the client not the env
<marcoceppi> example usage hazmat ?
<hazmat> marcoceppi, its got cli help.. python jport.py -h .. but an example python jport.py myblog 8080
<hazmat> you can target individual units or the whole service
<marcoceppi> that's what I'm doing wrong, thanks
<hazmat> or spec udp via  631/udp
<marcoceppi> I didn't notice it was missing the hashbang, so ./oneoff wasn't quite working
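The point hazmat makes above is that hook commands like open-port need the environment (JUJU_CLIENT_ID, the agent socket) that only exists inside a live hook context, which is why the first debug-hooks window rejects it. The supported place for it is a charm hook; a minimal hypothetical sketch (the charm path and the choice of the start hook are assumptions):

```shell
#!/bin/sh
# Stage a minimal charm hook that opens a port. open-port is only valid when
# the unit agent runs the hook and supplies the hook cli context.
set -e
mkdir -p /tmp/mycharm/hooks        # placeholder charm tree
cat > /tmp/mycharm/hooks/start <<'EOF'
#!/bin/sh
set -e
open-port 8080/tcp    # works here; fails from a bare shell on the unit
EOF
chmod +x /tmp/mycharm/hooks/start
echo "wrote hooks/start"
```

Outside a hook, the only route is from the client side, as with hazmat's one-off jport.py script above.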
<ninjix> is there an api for driving juju with python?
<hazmat> ninjix, its written in python as a library
<hazmat> but there is no supported public api from that library
<ninjix> hazmat: tnx
<jcastro> negronjl, lynxman, jamespage: all normal ~charmers are smoked today, any of you want to do a review of the incoming openerp charm?
<negronjl> jcastro: ahh ... I see .. .so we are your Plan B right ?  :D
<ninjix> I'm looking for some documentation covering the workflow of juju's provisioning process when working with orchestra server.
<negronjl> jcastro:  I can review .... .give me a bug #
<jcastro> https://bugs.launchpad.net/charms/+bug/912050
<ninjix> Does juju try to completely reprovision instances via cobbler for each bootstrap?
<jcastro> https://bugs.launchpad.net/charms/+bug/956259
<jcastro> znc is available too if someone wants to snag it
<lynxman> jcastro: negronjl jumped the shark before I could ;)
<jcastro> lynxman, znc bro!
<lynxman> jcastro: no prob hermano ;)
<jcastro> also I don't know if someone reviewed negronjl's gearman stuff yet
<negronjl> jcastro:  the gearman stuff is not ready to be in the official store yet
<jcastro> ah ok
<negronjl> jcastro:  Do you have a filter so I can see what needs to be reviewed ( as I get a chance, I can look them over )
<jcastro> https://bugs.launchpad.net/charms/+bugs?field.tag=new-charm
<marcoceppi> https://bugs.launchpad.net/charms/+bugs?field.tag=new-charm
<marcoceppi> ninja'd
<jcastro> you want NEW and fix committed
<jcastro> negronjl, flip gearman to incomplete if it's not ready please
<negronjl> jcastro:  changed it to in progress.
<jcastro> rock
<SpamapS> gmb: *awesome*
<ninjix> what are the differences between orchestra-juju-available and orchestra-juju-acquired mgmt classes?
<jseutter> Question: When I try to deploy a service locally, I get "Invalid value for multicast_port: 34569".  Does anyone know what causes this?
<jseutter> I'm fairly certain it was working last week, before a reboot.
<SpamapS> ninjix: its all in the name. :) available means juju isn't using it but you want juju to use it if it needs a machine. acquired means juju has taken control.
<SpamapS> jseutter: update all your charms (if you did a 'charm getall' just do 'charm getall /path/to/charms')
<SpamapS> jseutter: a change landed about 2 weeks ago that is breaking on some older versions of charms.
<ninjix> SpamapS: got it. thanks
<ninjix> is Juju or Cobbler responsible for the machine start command?
<SpamapS> ninjix: juju will tell cobbler to power off/on the machine after acquiring it. If you don't have power control, you'll have to manually reboot it.
<ninjix> SpamapS: so it is up to Cobbler to have the correct power mgmt script setup
<SpamapS> ninjix: the power control stuff in cobbler is a bit of black magic (no juju pun intended.. ;) .. so good luck. :)
<ninjix> :)
<SpamapS> ninjix: we're using it in our test lab constantly.
<ninjix> looks like they suck in bash scripts from /etc/cobbler/power through Cheetah
<ninjix> SpamapS: is juju/providers/orchestra the place I should be looking for cobbler control commands?
<SpamapS> ninjix: probably
<SpamapS> ninjix: __init__.py should have a MachineProvider.start_machine() method that will lead you to where the actual code is
<ninjix> SpamapS: ahh... thank you
<negronjl> jamespage: ping
<jamespage> negronjl, hey
 * jamespage reads scrollback
<negronjl> jamespage: bbcmicrocomputer tells me that you are interested in merging the tomcat6 and tomcat7 charms
<negronjl> jamespage: I can work that
<jamespage> negronjl, great - that would be fantastic
<jamespage> I last touched tomcat7 - I did some work on config management and managing in multicast environments
<negronjl> jamespage: I saw that.
<negronjl> jamespage:  I'll add some config options to ask which tomcat to deploy and will default to 7.
<jamespage> negronjl, great - that would be nice
<negronjl> jamespage: I'll keep you posted
<jamespage> I think the only diff I could spot was the format of tomcat-users.xml
<negronjl> jamespage: shouldn't be that difficult
<jamespage> nah
<jamespage> easy peasy
<negronjl> jamespage:  cool.  I'll keep you posted
<jamespage> ta
<jamespage> bbcmicrocomputer, are you looking at the hive charm?
<bbcmicrocomputer> jamespage: yeah
<bbcmicrocomputer> jamespage: I'm somewhat slow atm, but I'm getting there
<jamespage> bbcmicrocomputer, cool - I'll assign the bug to you so its clear
<bbcmicrocomputer> jamespage: ok
<jamespage> bbcmicrocomputer, bug 803531
<_mup_> Bug #803531: Charm Needed: Hive <hot> <Juju Charms Collection:In Progress by robert-ayres> < https://launchpad.net/bugs/803531 >
<jamespage> jcastro thinks its hot
<bbcmicrocomputer> jamespage: ok, cool
<jamespage> negronjl, do we still need a pig charm?  I've added it as a config option to the new hbase and hadoop charms
<negronjl> jamespage: no, I don't think we do.
<jamespage> negronjl, OK - I'll close off the bug report then
<negronjl> jamespage: thx
<jcastro> jamespage, anything tagged "hot" is from the original "man this would be epic" list.
<jamespage> no pressure bbcmicrocomputer!
<negronjl> bbcmicrocomputer: no pressure... jcastro will just hound you every day about it :D
<jcastro> I am a loud and festive person
<bbcmicrocomputer> ha ha awesome
<negronjl> jcastro: you must be Puerto Rican ( loud ) :)
<bbcmicrocomputer> I'm just happy to see my name on my first bug
<jcastro> oh dude, you want more?
<bbcmicrocomputer> jcastro: ah...
<negronjl> bbcmicrocomputer: careful with what you ask xD
<jcastro> jamespage, since I have you here I have a fun request, how's your jetty? And do we have anything using it that serves pages?
<jamespage> jcastro, its OK - some of the hadoop stuff uses it - what are you looking todo?
<jamespage> embedded tho so non-obvious
<jamespage> jcastro, biab
<jcastro> jamespage, I think it'd be cute to have jetty with its spdy module turned on serving something
<jcastro> but to browsers, not internal stuff nothing hits
<jcastro> robbiew, any updates on osx client? imbrandon's looking to charm but is on OSX
<robbiew> that guy went silent...so I don't see one coming for 12.04
<robbiew> we'll have one for 12.10
<SpamapS> 12.04/12.10 shouldn't matter to OSX users ;)
<SpamapS> They can just grab bzr and install from trunk.
 * SpamapS wonders if Cobbler would run on OSX
<marcoceppi> So, juju deploy, then juju ssh that machine and I get a permission denied.
<marcoceppi> I have ssh keys defined in my environment.yaml, and I can juju ssh older machines provisioned
<marcoceppi> SpamapS hazmat any opinions :?
<hazmat> odd
<hazmat> marcoceppi, can you pastebin the console output from one of the newer machines
<hazmat> ec2-get-console-output that is
<marcoceppi> jcastro: ^^ You've got the AWS account
<hazmat> jcastro, can you login into the machines with juju ssh
<jcastro> ok that wants a -k
<jcastro> not the new ones
<marcoceppi> hazmat: neither of us can
<marcoceppi> jcastro: weird, I can ssh in to the one you just deployed
<_mup_> juju/relation-hook-commands-spec r7 committed by jim.baker@canonical.com
<_mup_> Updated to use new relation id format, added better examples, and worked on some clarification
<hazmat> jcastro, for one of the problematic instances, getting the output of ec2-get-console-output $instance_id AND ec2-describe-instance-attribute --user-data $instance_id should be helpful
<jcastro> ok so I don't know what to put after the -k for those commands
<marcoceppi> -K is the aws_secret_key
<marcoceppi> -C is the .pem cert
<jcastro> ah ok
<marcoceppi> but we don't use .pem certs
<marcoceppi> so not sure about that
<marcoceppi> i-97fc86f3
<_mup_> juju/relation-info-command-spec r7 committed by jim.baker@canonical.com
<_mup_> Updated to use new relation id format and added back bug reference
<marcoceppi> that's a troublesome instance id
<jcastro> does juju put pems someplace?
<jcastro> I only have my AWS secret key
<jcastro> also, we could just redeploy a new environment right?
<marcoceppi> not sure, but more information. I just did a destroy-service for the stackmobile charm, got this:
<marcoceppi> 2012-03-19 14:48:19,365 ERROR no node
<marcoceppi> I wonder if I'm missing something on this computer
<SpamapS> jcastro: forget pems :)
<SpamapS> jcastro: I put my aws creds in shell files
<jcastro> yeah so, I have no idea how to use AWS without juju
<SpamapS> awesome :)
<jcastro> the concept of using AWS without juju was never an option for me, I am the new new devop. :p
<SpamapS> jcastro: access-key should be in EC2_ACCESS_KEY and secret-key in EC2_SECRET_KEY
<jcastro> (seriously using it via juju was my first introduction)
<jcastro> ok
<SpamapS> jcastro: also, use euca-* instead of ec2-*  .. they're written in python instead of java, so they start up about 2-3s faster. :)
<jcastro> SpamapS, and for EC2_URL?
<SpamapS> jcastro: "https://ec2.amazonaws.com"
<jcastro> ok all set
<jcastro> ok so now what I need to do to get hazmat what he needs
<SpamapS> jcastro: euca-get-console-output i-xxxxxx
 * SpamapS goes to lunch
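SpamapS's "creds in shell files" approach above, sketched out. The key values are obvious placeholders; the variable names (EC2_ACCESS_KEY, EC2_SECRET_KEY, EC2_URL) and the diagnostic commands are the ones given in the discussion.

```shell
#!/bin/sh
# Hypothetical sketch: keep EC2 credentials in a shell file and source it
# before running the euca-*/ec2-* diagnostics. Values are placeholders.
set -e
cat > /tmp/ec2-creds.sh <<'EOF'
export EC2_ACCESS_KEY="AKIAXXXXXXXXXXXXXXXX"
export EC2_SECRET_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export EC2_URL="https://ec2.amazonaws.com"
EOF

. /tmp/ec2-creds.sh
# with those set, the diagnostics hazmat asked for become:
#   euca-get-console-output i-97fc86f3
#   ec2-describe-instance-attribute --user-data i-97fc86f3
echo "EC2_URL=$EC2_URL"
```

As noted above, euca-* tools are a drop-in for ec2-* and start a few seconds faster since they're Python rather than Java.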
<jcastro> hazmat, http://paste.ubuntu.com/891103/
<jcastro> hah man, what if it's that apt/ec2 mirror bug
 * jcastro assumes it stopped at the Get:
<hazmat> jcastro, that's not the whole console output
<jcastro> i've run it a few times and that's all I get
<marcoceppi> jcastro: I'm about to deploy to a new machine
<marcoceppi> if you want to try against that one
<SpamapS> jcastro: no it takes a few minutes
<SpamapS> jcastro: its very lazy in updating
<jseutter> SpamapS: Thanks.  I now get a different error: "Invalid value for force_https: False".  Any ideas?
<jcastro> ok
<SpamapS> jcastro: though I'd expect it to be up to date within 5 minutes
<SpamapS> jseutter: more charms that are out of date. :-/
<SpamapS> hazmat: ^^
<SpamapS> hazmat: any news on that fix?
<SpamapS> hazmat: I saw a merge proposal.. would it help if I review it?
<hazmat> SpamapS, definitely
<SpamapS> jseutter: getall might not have worked. If you have a recent version of charm-tools, try 'charm update --fix path/to/charms'
<SpamapS> hazmat: ok, will do after lunch
<jseutter> SpamapS: nod.  Trying update without the --fix as my client must be too old for it
<SpamapS> jseutter: should be in the PPA
<jseutter> nope, no luck.
<SpamapS> hmm actually looks like the PPA build isn't picking it up
<jcastro> ok got it hazmat http://paste.ubuntu.com/891125/
<SpamapS> jcastro: halted?
<SpamapS> juju-machine-agent start/running, process 4432
<SpamapS> thats a normal start... hrm
<marcoceppi> Could it be something with my config?
<marcoceppi> If jcastro launches a service, I can ssh to it
<marcoceppi> If I deploy I can't and he can't
<SpamapS> diff your environments.yaml
<SpamapS> could be something out of sync there
<SpamapS> like, perhaps, ssh-authorized-keys ;)
<jcastro> do they need to be exactly the same? or just the specific stanza?
<SpamapS> I'd recommend them to be exactly the same at least for that one environment
<jcastro> hmm, but this was working fine
<jcastro> odd
<marcoceppi> jcastro: I am on a different machine now
<SpamapS> also same juju versions?
<marcoceppi> was on laptop, now at work desktop in the office
<jcastro> aha!
<jcastro> want me to resend it?
<marcoceppi> jcastro: I copied it from my laptop
<SpamapS> dpkg -l juju
<marcoceppi> FUUUUUUU
<marcoceppi> FUUUUUUU
<SpamapS> http://cdn.memegenerator.net/instances/400x/16609403.jpg
<jcastro> hahaha
<marcoceppi> I had a slightly older version of juju, testing.
<hazmat> jcastro,  can you pastebin  ec2-describe-instance-attribute --user-data $instance_id
<hazmat> oh.. client drift
<marcoceppi> not sure if that was the problem, but it could be
<SpamapS> http://cdn.memegenerator.net/instances/400x/16609550.jpg
<marcoceppi> <3
<marcoceppi> DIDN'T FIX :(
<marcoceppi> jcastro: i-e34a0f87 is the latest instance id with this issue
<SpamapS> I'd be interested in seeing the userdata for a working and non-working instance too
<jcastro> does ec2-blah look for different EC2 creds than the variables I set?
 * SpamapS is about to pass out from starvation
 * SpamapS needs food.. BADLY
 * SpamapS shot the food
<marcoceppi> no changes between stanzas
<marcoceppi> dang this is getting really annoying
<_mup_> juju/status-changes r482 committed by kapil.thangavelu@canonical.com
<_mup_> all unit/machine agent states reported in key 'agent-state'
<jcastro> marcoceppi, how about just relaunching a new environment?
<marcoceppi> jcastro: we'd have to destroy the current env, which would mean no more omgubuntu
<jcastro> wait huh?
<marcoceppi> you talking about destroy-environment?
<jcastro> no
<jcastro> just creating a new one
<jcastro> and just "juju -enewthing deploy blah"
<jcastro> we can do that right?
<marcoceppi> Oh, like we talked about last night?
<jcastro> nod
<marcoceppi> yeah, we can try that
<jcastro> ok so maybe create a new one, but have hazmat and spamaps in there too
<_mup_> juju/status-changes r483 committed by kapil.thangavelu@canonical.com
<_mup_> all unit/machine agent states reported in key 'agent-state'
<hazmat> oi.. a clone env command sounds like fun
<marcoceppi> hazmat: that would be sweet
<marcoceppi> actually, hazmat, anything bad that might happen if we change the control-bucket and re-bootstrap an already "bootstrapped" aws account?
<hazmat> marcoceppi, ever seen Ghostbusters.. don't cross the streams ;-)
<hazmat> marcoceppi, if you change the control-bucket you're changing the identity for all intents and purposes
<marcoceppi> so..that's a don't go there? I mean, they crossed the streams and took out the marshmallow man.
<marcoceppi> that was a good thing :)
<jcastro> Alright, this bucket is TOAST.
<hazmat> currently juju does environment identity by env name + control bucket data, changing those effectively orphans an existing environment
<marcoceppi> so, what I've done is copied the stanza and made omgthisisreal with a different control bucket
<marcoceppi> so the old stanza is still there
<hazmat> and switching to an existing one, that's already bootstrapped.. and then bootstrapping again, it should error at you that the env is already bootstrapped.
<marcoceppi> darn
<marcoceppi> jcastro: We can just launch smalls on my AWS instead
<marcoceppi> nbd
<jcastro> hazmat, right so my concern is the current setup has the live site
<jcastro> indeed
<jcastro> marcoceppi, wait one, let me ask about reimbursement
<marcoceppi> *shrug* k
<jcastro> you've done enough work for the past 3 days, $ is where I draw the line. :)
<jcastro> oh hey, would generating a new pair work?
<jcastro> like just generating a new pair of access creds for AWS?
<_mup_> juju/status-changes r483 committed by kapil.thangavelu@canonical.com
<_mup_> hyphenate unit agent error states
<jcastro> hazmat, would that work?
<hazmat> jcastro, ECONTEXT, you mean giving out an IAM/subaccount off AWS?
<hazmat> or you mean getting access to the instances that are running that are not shell accessible?
<jcastro> yeah, if I generate a new access and private key to marco
<jcastro> and he puts that in his environments.yaml, and then bootstraps
<hazmat> jcastro, yeah.. that's viable
<jcastro> he can do what he needs without colliding with the existing running thing?
<hazmat> jcastro, as long as its a separate env yes
<hazmat> time to pick up the kido from day care, bbiab
<ninjix> SpamapS: can juju ask cobbler to provision more available instances?
<ninjix> looking at the cobbler.py, I only see functions that use what cobbler has provisioned
<SpamapS> ninjix: not sure I fully understand the question. Cobbler is for provisioning servers..
<SpamapS> ninjix: it can treat VMs like servers.. but.. if you want to create new VM's, thats more koan's territory
<ninjix> SpamapS: i see.
 * SpamapS reads backscroll with agony
<SpamapS> marcoceppi: I think hazmat may have misunderstood your question
<SpamapS> jcastro: ^^
<SpamapS> If you guys want to use the same AWS account, but with a new environment.. just create a new environment name, with a different control bucket. That is all.
 * hazmat back tracks
<marcoceppi> sweet
<hazmat> oh.. yeah.. totally
<hazmat> i thought it was switching a bootstrapped env bucket to another bootstrapped env bucket, thus losing the original in the process
<SpamapS> Yeah I know
<SpamapS> No they're just making a new env
<robbiew> I'd also recommend setting your default environment to the non-production one...so if you forget to use "-e", you don't hose the production deployment
 * robbiew learned that the hard way...lol
<SpamapS> YES
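robbiew's advice above (make the non-production environment the default so a forgotten `-e` can't touch production) comes down to the `default:` key in ~/.juju/environments.yaml. A hedged sketch; the stanza names, bucket names, and secrets are all made up:

```shell
#!/bin/sh
# Stage a hypothetical environments.yaml where staging is the default env.
set -e
cat > /tmp/environments.yaml <<'EOF'
default: omg-staging
environments:
  omg-staging:
    type: ec2
    control-bucket: juju-omg-staging-bucket
    admin-secret: example-staging-secret
  omg-production:
    type: ec2
    control-bucket: juju-omg-production-bucket
    admin-secret: example-production-secret
EOF
# production then always needs an explicit flag:
#   juju -e omg-production status
echo "default: $(head -n1 /tmp/environments.yaml | cut -d' ' -f2)"
```

This also matches hazmat's point above: env name + control bucket are the environment's identity, so the two stanzas never collide.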
<_mup_> juju/juju-status-changes-spec r7 committed by jim.baker@canonical.com
<_mup_> Addressed review points, along with an expanded example
<SpamapS> in fact, I was thinking the other day that production envs should be named impossible to type things like x848310f0f0assdf9a9sf-dont-delete-me
<robbiew> heh
<imbrandon> SpamapS: and you have to enter it in reverse so no c/p
<SpamapS> and the terminal has to be in Cyrillic to read it
<imbrandon> heh, wingdings
<imbrandon> or whatever that gawd awful font is named
<imbrandon> actually I've been using a font called Anonymous Pro lately for terms and IDE's and such and grown quite attached to it, despite the name it's free ( as in beer ) dunno bout oss
<imbrandon> but quite nice
<imbrandon> created especially for terms and ide code use
<SpamapS> imbrandon: the Ubuntu fixed font was also created with that in mind
<imbrandon> SpamapS: yea but i could not find a way to get that other than ON ubuntu, and even as a core dev i don't spend much time on a ubuntu desktop if any actually, mostly all via ssh or remote forwarded x sessions rarely
<imbrandon> but if there is a ttf avail for me to use on OSX i'd be more than willing to give it a shot
<imbrandon> :)
<imbrandon> hrm i might look and see if the lic for it will allow me to package it in a brew app and make it easy for any osx users to use if thats the case
 * imbrandon assumes it's on LP somewhere and goes to dig
<SpamapS> I'm quite confident the license is permissive :)
<imbrandon> i assumed it would be, and i found font.ubuntu.com :)
<imbrandon> ahh even a Cyrillic version , there we go SpamapS thats what the prod name must be in
<imbrandon> :)
<negronjl> jamespage: ping
<jamespage> negronjl, pong
<SpamapS> jamespage: did you disable the reboot test? I don't see it anymore on jenkins.qa.ubuntu.com
<negronjl> jamespage: had a question about the tomcat7 charm but, i figured it out.
<jamespage> SpamapS, its still running - just not appearing on the dash
<jamespage> I need to land the final fixes to it tomorrow
<SpamapS> AHH ok
<jcastro> robbiew, yeah I know about default to not be prod, I also hosed myself in the past with that
 * SpamapS too
<cowmix> the juju docs say the project is 'early access' and don't use it for deployments. will that change for 12.04?
#juju 2012-03-20
<SpamapS> cowmix: hm
<SpamapS> cowmix: its definitely *far* better than when that statement was written
<SpamapS> cowmix: I think for certain use cases, its a good choice in production
<SpamapS> cowmix: https://bugs.launchpad.net/juju/+bugs?field.tag=production
<SpamapS> cowmix: those are known issues that should be considered before putting it there. Perhaps we should update the docs to point at that list.
<cowmix> SpamapS: thanks... I'm *very* excited about juju.. I'm trying out the 12.04 beta as soon as its out
<SpamapS> cowmix: beta1 is out. :)
<SpamapS> cowmix: I'd recommend using juju from the ppa though.. we should land a new version in beta2, and the PPA is much closer to that.
<cowmix> Awesome.. when 12.04 is fully baked.. use the release version then?
<cowmix> It's interesting because I have some friends that do HUGE EC2 deployments on Ubuntu and they didn't know about juju.. I turned them on to it and they seemed jazzed.
<cowmix> Canonical really needs to pump this up harder.. it solves an issue everyone is trying to solve on their own.
<SpamapS> cowmix: a little over a year ago, juju didn't exist.. so.. give us some time. :)
<SpamapS> cowmix: there's a charm contest going on right now.. you should get some of them to enter it. :)
<cowmix> SpamapS: is Canonical dog-fooding it now in their own infrastructure?
<sloth_> anyone got time for a really basic question
<SpamapS> cowmix: here and there, yes.. nothing important, as we only run that on released LTS's, and lucid can't run juju
<SpamapS> sloth_: just ask, and hang out for an answer, somebody will get back to you. Also we watch the juju tag on askubuntu.com a lot, so asking there might be a good idea
<sloth_> cool.  so I'm totally new to juju (and not an Ubuntu guy normally).  I'd like to use it to provision a mongo cluster on amazon.  I'm not sure if I have to create an ubuntu EC2 instance to use as the provisioning host (that wouldn't be part of the 3 machine replication set) or if I can do it from my mac or where to start
<sloth_> I looked at the introduction to juju but I suspect I'm missing a few pieces of knowledge on how to get to where it starts
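For the workflow sloth_ is asking about: juju runs from the client machine and creates its own bootstrap/management node in EC2, so no separate provisioning host is needed. A hedged sketch of the 2012-era CLI (whether the mongodb charm wires the replica set up automatically is a property of the charm, not shown here):

```shell
#!/bin/sh
# Stage a hypothetical deploy script for a 3-unit mongodb service.
set -e
cat > /tmp/mongo-cluster.sh <<'EOF'
#!/bin/sh
set -e
juju bootstrap          # spins up the management node in EC2 for you
juju deploy mongodb     # first unit
juju add-unit mongodb   # second...
juju add-unit mongodb   # ...and third
juju status             # watch the units come up
EOF
chmod +x /tmp/mongo-cluster.sh
echo "wrote $(grep -c '^juju' /tmp/mongo-cluster.sh) juju commands"
```

The environments.yaml with EC2 credentials is the only prerequisite on the client side.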
<_mup_> juju/status-changes r484 committed by kapil.thangavelu@canonical.com
<_mup_> collapse relation display in units unless error, all relation names are multi-valued
<hazmat> i'm going to hit a devops meetup tomorrow, we're hitting a few conferences this year
<_mup_> juju/status-changes r485 committed by kapil.thangavelu@canonical.com
<_mup_> status uniques when the same service has multiple relations to the same endpoint
<_mup_> Bug #959884 was filed: Improved status output <juju:In Progress by hazmat> < https://launchpad.net/bugs/959884 >
<_mup_> juju/force-upgrade r465 committed by kapil.thangavelu@canonical.com
<_mup_> finish upgrade-charm --force cli tests
<_mup_> juju/force-upgrade r466 committed by kapil.thangavelu@canonical.com
<_mup_> add to ignores
<hazmat> are pkg task sets addressable, ie. can i install the cloud/server task set into a container, or is it more of a preseed thing
<hazmat> ah.. tasksel
<hazmat> hmm.. doesn't want to work without a tty
<_mup_> juju/series-from-charm r464 committed by kapil.thangavelu@canonical.com
<_mup_> charm series is used as service system constraint
<_mup_> Bug #959914 was filed: Charm series is used for service series constraint. <juju:In Progress by hazmat> < https://launchpad.net/bugs/959914 >
<bkerensa> SpamapS: Uhh lots of suggestions and I don't know where to begin :) I guess I will bug jcastro for guidance tomorrow :P I must be off to bed now
<SpamapS> bkerensa: Sorry.. I am really excited about your charm which is why I'm being so picky about it. I will likely be a user. :)
<SpamapS> bkerensa: bug me for help too, I'm happy to assist. :)
<jamespage> whats the state of juju on lucid?
<jamespage> might be a funny question but I'd like to see whether I can get apache bigtop running with the hadoop charm
<hazmat> jamespage, it hasn't been tried in some time, the libzk libs need a more recent version to avoid some bugs there afaicr. we may have picked up some py 2.7isms
<jamespage> hazmat, ack
<jamespage> it was a 'if i get time' type activity TBH
<yolanda> hi all, i'm trying to use juju in openstack, but i receive an error in the machine every time i try to create one, does anyone know about this problem?
<yolanda> hi, anyone can help with juju and openstack?
<james_w> yolanda, hi, what's the error?
<yolanda> hi, james_w, just my openstack instance is created, but the status is "error"
<james_w> yolanda, hmm, I'm not sure how to find out what caused the error
<jamespage> yolanda, could be a number of things
<yolanda> james_w, i see that the image used is ami-00000049, is that ok?
<jamespage> it might be best to take a look at the instance directly in openstack to see if you can get more info
<yolanda> jamespage, how can it be done? i only can see the instance when i do an euca-describe-instances, and see that status
<jamespage> yolanda, does it show error there as well?
<yolanda> no
<jamespage> yolanda, hmm
<jamespage> euca-get-console-output <instanceid> might show you a bit more
<yolanda> mm, let me see
<yolanda>  euca-get-console-output i-00001b40
<yolanda> UnknownError: An unknown error has occurred. Please try your request again.
<jamespage> yolanda, oops
<yolanda> not a very clear error :)
<jamespage> yolanda, that's an "openstack is broken" error
<yolanda> jamespage, but i'm running some other instances in openstack right now
<yolanda> i'll try to create that one manually
<jamespage> yolanda, you might be able to ssh to it - please try that
<jamespage> might be able to get some good debug out of it yet...
<yolanda> jamespage, cannot ssh, i can't see any internal ip
<jamespage> yolanda, what is its status? running?
<yolanda> i have 3 instances, i can see the internal ip for the previous two, but this doesn't show any ip
<yolanda> status is error
<yolanda> the privateIpAddress field is empty
<jamespage> yolanda, so basically it completely failed to start
<yolanda> yes, it seems
<yolanda> will try to create same image manually
<yolanda> ok, same image created with euca-run-instances work
<yolanda> so it's something with juju process
<yolanda> btw, what i want is to test some juju charms; if i cannot do it with openstack, what can i use? i did some tests with a personal EC2 account, but i need a large machine, and they started running up a sizable bill, so i stopped it
<jamespage> yolanda, I think I know what it might be - lemme just check
<yolanda> ok, jamespage, thanks
<jamespage> yolanda, can you try with ami-00000048 please
<yolanda> jamespage, that can be specified with the default-image-id param, right?
<jamespage> yolanda, it can
<jamespage> yolanda, have you tried using the local provider for juju?
<yolanda> jamespage, yes, but gives another error: error: Falló al iniciar la red default
<yolanda> error: Requested operation is not valid: network is already active
<yolanda> Command '['virsh', 'net-start', 'default']' returned non-zero exit status 1
<yolanda> 2012-03-20 15:38:29,472 ERROR Command '['virsh', 'net-start', 'default']' returned non-zero exit status 1
<jamespage> yolanda, ah - I know what this is
<yolanda> i saw that i had to logout/login, but i already did it
<jamespage> yolanda: what does virsh list-network give you
<yolanda> unknown command "list-network"
<jamespage> sorry net-list
<jamespage> not list-network
<SpamapS> yolanda: did you logout/back in after installing juju and especially libvirt-bin?
<jamespage> yolanda, can you check that the account you are using is in the libvirtd groups as well please
<yolanda> SpamapS, yes
<yolanda> Nombre               Estado     Inicio automático
<yolanda> -----------------------------------------
<yolanda> default              activo     si
<yolanda> sorry but it's spanish output
<SpamapS> no problemo, yo hablo un poquito ;)
<yolanda> genial :)
<yolanda> jamespage, how can i check it?
<jamespage> yolanda, type 'groups'
<yolanda> groups
<yolanda> yolanda adm dialout cdrom plugdev lpadmin admin sambashare libvirtd
<yolanda> that?
<jamespage> yes - looks OK
<jamespage> 'libvirtd' being the critical bit...
<jamespage> hmm
<jamespage> I wonder
<jamespage> yolanda, I think thats a bug - juju parses the output of net-list and looks for 'active'
<jamespage> 'active' != 'activo'
<yolanda> damn
<yolanda> so how can i fix it? is it possible?
<SpamapS> wait
<yolanda> please don't say: use Ubuntu in english :)
<SpamapS> yolanda: dpkg -l juju
<SpamapS> That bug was fixed... we set LANG=C before calling net-list
<yolanda> Deseado=Desconocido/Instalar/Eliminar/Purgar/Retener
<yolanda> | Estado=No/Instalado/Config-files/Desempaquetado/Medio-conf/Medio-inst/espera-disparo/pendiente-disparo
<yolanda> |/ Err?=(ninguno)/Requiere-reinst (Estado,Err: mayúsc.=malo)
<yolanda> ||/ Nombre                            Versión                          Descripción
<yolanda> +++-=================================-=================================-==================================================================================
<yolanda> ii  juju                              0.5+bzr398-0ubuntu1               next generation service orchestration system
<jamespage> SpamapS, I was just looking at that
<jamespage> yolanda, I would recommend using juju from the team PPA - that is a very old version
<jamespage> (the one that is in oneiric)
<yolanda> jamespage, how can i config it? in which repo is?
<jamespage> yolanda, https://launchpad.net/~juju/+archive/pkgs
<yolanda> let me see
<yolanda> i wonder if that's the cause of all my problems
<jamespage> yolanda, not all of them but this one - probably
<jamespage> SpamapS, good catch
<jcastro> krondor, how's it coming along?
<jcastro> krondor, hey do you guys plan to use nginx in the moodle charm?
<yolanda> let me test
<yolanda> i updated it, now i have this error:
<yolanda> Starting storage server...
<yolanda> could not connect before timeout
<yolanda> 2012-03-20 15:58:41,768 ERROR could not connect before timeout
<bac> SpamapS: gmb said he talked to you about packaging for python-shelltoolbox.  will your packaging work for a lucid build?
<SpamapS> We really should just push a newer juju into 11.10 and fore-go the proposed testing requirements. :-P
<SpamapS> bac: it might, but you can't deploy charms on lucid at the moment, so thats sort of moot. ;)
<krondor> jcastro:  it's going well so far.  We're going apache because the thought is most moodle users would need the auth modules.
<jamespage> yolanda, please try 'juju --verbose bootstrap'  - you might need to juju destroy-environment first
<jamespage> SpamapS, +1 its next to useless how it is ATM
<bac> SpamapS: our buildbot slaves will be running lucid containers and they need to use the package too, that's why i ask.
<jamespage> jcastro, lookings at the jetty/spdy stuff ATM - have a few ideas
<SpamapS> bac: I can just use python-support.. that will work back to lucid
<yolanda> jamespage, no timeout error now, seems i got it!
<yolanda> i'll check with status
<bac> SpamapS: that would be great
 * SpamapS goes afk for a bit
<hazmat> more recent versions set the locale to C explicitly before calling out to libvirt
<hazmat> via env vars
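The locale pinning hazmat describes can be sketched roughly like this (a guess at the pattern, not juju's actual code; the `virsh` call is shown commented since it needs libvirt installed):

```shell
# Pin the C locale for a single command whose output gets parsed, so
# `virsh net-list` prints "active" rather than "activo"/"actif"/etc.
run_c_locale() { LANG=C LC_ALL=C "$@"; }

# run_c_locale virsh net-list       # what juju effectively does
run_c_locale sh -c 'echo "$LANG"'   # prints "C" regardless of system locale
```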
<jcastro> jamespage, right so I was thinking, something simple, that would end up being "this is a fast way to mess with SPDY" and then tell people about it. As a technology showcase, nothing like serious or something people would actually use.
<jamespage> jcastro, how about jenkins on SPDY?  I'm going to give it a whizz
<yolanda> jamespage, SpamapS, thanks a lot for the help
<jamespage> yolanda, np
<marcoceppi> I noticed something with hooks firing. My db-joined hook takes quite a while to execute. But as soon as I add-relation it shows as "state: up"; if it eventually fails it shows the failure then. So I know the relation hook fired, but it doesn't wait for the relation hook to finish executing. Is this a bug or intended usage?
<jcastro> jamespage, ok so that is sexy, yes, absolutely!
<m_3> marcoceppi: that's a known issue
<marcoceppi> m_3: Cool, I'm fine with just adding relation, exposing, juju status until the port shows as open (thus the end of the script is done)
<m_3> marcoceppi: yeah, we've had several discussions of this problem, but no solns afaik
<krondor> jcastro:  maybe I can let them pick nginx or apache w/ a config directive.  Was thinking the same with postgresql / mysql.
<jcastro> krondor, yep
<_mup_> juju/trunk r486 committed by kapil.thangavelu@canonical.com
<_mup_> merge setuppy-fixes from paul [a=][r=hazmat,clint]
<marcoceppi> m_3: I wish I knew more about how juju launched hooks :\ one day I'm going to start digging in to the source code
<SpamapS> marcoceppi: open-port is actually supposed to be the signal that your service is ready.
<marcoceppi> SpamapS: yeah, that's what I've been using, it's just a wee bit confusing to see the relation state as up when the hook is still running, because then it goes from up to error half way though
<marcoceppi> Merely commenting on how it'd be nice if it went from state: "adding", to "up" or "error"
<ninjix> I am running into ssh host key errors after I destroy my environment and try to bootstrap a new one
<SpamapS> marcoceppi: yeah, I think a more appropriate word would be "active"
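A hook written to the open-port-as-readiness convention SpamapS describes might look like this minimal sketch (service name and port are made up; `open-port` only exists inside a juju hook environment):

```sh
#!/bin/sh
# Minimal start-hook sketch: finish all slow setup first, and call
# open-port last -- the open port is what shows in juju status, so it
# doubles as the "service is actually ready" signal discussed above.
set -e

service myapp start                            # hypothetical service
until nc -z localhost 8080; do sleep 1; done   # wait until it listens
open-port 8080/tcp                             # only now advertise readiness
```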
<SpamapS> ninjix: thats common
<SpamapS> ninjix: there's an open bug to have juju ssh use a generated known_hosts file to avoid that..
<jcastro> lmorchard, hey I heard you were having some juju problems, anything I can do to help?
<ninjix> how do you guys run your ssh_config?
<ninjix> did you set StrictHostKeyChecking off?
<SpamapS> ninjix: for production, I just deal with the pain. For testing I wildcard the test hosts and do StrictHostKeyChecking no
<ninjix> ;)
<SpamapS> the way I do that, which might be a bit evil, is by wildcarding by amazon region. All my production instances are in us-west-1 .. I test in us-east-1
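The region wildcard approach might look roughly like this in ~/.ssh/config (the exact Host pattern is an assumption; each region has its own public DNS suffix, e.g. us-east-1 historically uses compute-1.amazonaws.com):

```
# Relax host-key checking only for throwaway test instances in one region.
Host *.compute-1.amazonaws.com
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

# Production instances (us-west-1 in this setup) keep the strict defaults.
```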
<ninjix> ok, cool. just wanted to make sure I hadn't missed something basic
<ninjix> SpamapS: btw, thanks for your help over the last few days. I've now got juju and cobbler driving our Proxmox cluster
<ninjix> dev team is going to be very pleased when I unveil this new cloud environment
<jcastro> this is quite cool
<jcastro> ninjix, when you're not so slammed it'd be nice to send this to the list.
<m_3> yeah, I'd love to see a writeup of that
<ninjix> sure
<SpamapS> ninjix: *NICE*
<jcastro> http://charms.kapilt.com/~patrick-hetu/oneiric/openerp-server
<jcastro> look at this readme
<jcastro> it's already better than all the stuff we wrote in the initial set of charms. :)
<jcastro> patrick and jseutter are killing it. :)
<SpamapS> jcastro: I'm so ashamed of mediawiki now.. I almost want to just nuke it and re-do the whole thing. ;)
<jseutter> huh?
<jcastro> jseutter, it's exciting to see you guys submitting these charms
<jseutter> jcastro: ah :)
<jcastro> shazzner's working on gitolite too
<ninjix> my service units keep appearing with <hostname>.localdomain why are they not using the domain delivered by DHCP?
 * SpamapS digs through the code to see where that value comes from
 * SpamapS goes a 4th level of indirection deeper and wonders what he has wandered into
<SpamapS> UGH
<SpamapS>         output = subprocess.check_output(["hostname", "-f"])
<SpamapS> ninjix: ^^
<SpamapS> that is just so wrong.. :-P
<ninjix> hmm... ok I'll work with the -f arg
<ninjix> just had 99% successful wordpress test run. Only thing that didn't work was that the 000-default site was left enabled
<ninjix> is that normal? I'm running today's PPA
<SpamapS> ninjix: its more complicated than that
<SpamapS> ninjix: basically the wordpress charm expects you to use the 'public-address' to access it..
<SpamapS> ninjix: I believe marcoceppi is working on fixing that
<SpamapS> ninjix: glad to hear you got it deploying though! :)
<SpamapS> marcoceppi: were you going to try and tackle the Host: header weirdness w/ wordpress?
<ninjix> the public address started working once I a2dissite 000-default
<SpamapS> ninjix: oh, hm, that sounds different
<jamespage> jcastro, that was painful - looks like upstream managed to not release spdy as part of the lastest jetty distro
<jamespage> I had to hack it in
<jcastro> nice
<ninjix> SpamapS: this seems to be Apache2 strangeness. The FQDN site looks properly configured but Apache stops using it as soon as I re-enable the default
<ninjix> ahh... this is happening because the 000-default is using the same FQDN as the juju created site
<SpamapS> ninjix: sounds like a bug in the charm really.. should just dissite the default
<SpamapS> ninjix: https://launchpad.net/charms/+source/wordpress/+filebug
<SpamapS> If you wouldn't mind. :)
<ninjix> no problem
<marcoceppi> SpamapS: That's a different issue with using HAProxy in front of WP
<ninjix> marcoceppi: I just appended the a2dissite to my local oneiric/wordpress/hooks/install
<ninjix> marcoceppi: will the next juju add-unit wordpress pick up the change?
<SpamapS> ninjix: no
<SpamapS> ninjix: you have to use upgrade-charm
<ninjix> does that imply that the my current env has the wordpress charm cached on the bootstrap instance?
<SpamapS> ninjix: yes, there's a webdav server that hosts the charm bundles
<ninjix> :)
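For the record, the command being referred to, run against a local repository (the path and charm name here are examples):

```sh
juju upgrade-charm --repository ~/charms wordpress
```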
<marcoceppi> actually, SpamapS I had a question about add-unit. When I add-unit it executes the charm without any real idea that it's an additional unit, correct? So it would be just as if I had done a deploy + add-relation + expose + whatever else I did to the previous unit
<SpamapS> marcoceppi: right
<SpamapS> marcoceppi: to make units aware of one another use peer relations
<marcoceppi> figured, follow up question
<marcoceppi> and answered.
<SpamapS> marcoceppi: though that one will proceed all the way through install -> started before being aware of the peer relationships
<SpamapS> marcoceppi: which has been a problem in the past for charms that need to work slightly different between 1 and 1+ nodes ;)
<marcoceppi> Lastly, is there a way within a non-peer relation hook to tell if there is another unit?
<marcoceppi> I'm trying to avoid having the db-relation-joined hook run a MySQL import when it already exists from the first instance
<SpamapS> marcoceppi: I'd use the database itself for that... not rely on juju for it
<marcoceppi> I guess just mysqladmin create and capture a non 0 exit or something?
<m_3> marcoceppi: there's a few different ways to do that... right... what SpamapS just said... check the db for a schema version or some content
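One way to "use the database itself", sketched under the assumption that the connection settings come from relation-get in a real db-relation-joined hook (the variable names and schema.sql path are made up):

```sh
# Import the schema only if the database does not already exist, so a
# second unit joining the relation doesn't clobber existing data.
if ! mysql -h "$db_host" -u"$db_user" -p"$db_pass" \
        -e "USE \`$db_name\`" 2>/dev/null; then
    mysqladmin -h "$db_host" -u"$db_user" -p"$db_pass" create "$db_name"
    mysql -h "$db_host" -u"$db_user" -p"$db_pass" "$db_name" < schema.sql
fi
```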
<_mup_> juju/subordinate r522 committed by kapil.thangavelu@canonical.com
<_mup_> update unit deploy signature
<ninjix> I am getting an SSH key error when I execute upgrade-charm
<ninjix> ssh works manually to the remote_host value
<negronjl> jamespage: ping
<jamespage> negronjl, pong!
<SpamapS> ninjix: all commands have to ssh to machine 0 so they can talk to zookeeper securely..
<negronjl> jamespage:  Here is what I have so far for a combined tomcat6 and tomcat7 charm ( lp:~negronjl/+junk/tomcat )
<negronjl> jamespage:  if/when you get a chance, I would appreciate some feedback.
<SpamapS> negronjl: +junk?! ;)
<jamespage> negronjl, yep - I'll take a look tomorrow
<negronjl> SpamapS: It's a work in progress and I don't want to confuse anyone that may think that this is an actual usable charm yet.
<negronjl> jamespage: thx
<negronjl> SpamapS: I'll put it in the proper place and put it through the proper process once I get done with it.
<SpamapS> negronjl: roger. :)
<_mup_> juju/relation-id r488 committed by jim.baker@canonical.com
<_mup_> Pass through relation id
<ninjix> SpamapS: how does one add a second mysql unit as a slave?
<ninjix> I noticed the mediawiki charm utilizes mysql slaves
<SpamapS> ninjix: currently the mysql charm only supports one-way replication, so you 'juju deploy mysql slave-service' and then 'juju add-relation master-service slave-service'
<ninjix> starting to get the hang of this. So it is the relation call that determines which hooks fire
<SpamapS> ninjix: right
<ninjix> so master-master would mean creating hooks that understand how to handle 'juju add-relation master-1 master-2' and then a 'juju add-relation master-2 master-1'
<ninjix> that's going to dial up the complexity of the charm
<ninjix> maybe I can add a hook for mysql-mmm pkg
<SpamapS> ninjix: you'd only have one 'add-relation'
<SpamapS> ninjix: they're bi-directional
<SpamapS> ninjix: you might be able to achieve it just by doing that now though.. hmm.. hadn't thought of trying that
<ninjix> how would it discriminate between a simple slave and master-master with only one add-relation
<ninjix> ahh... answer my own question. Just learned how to use the <service>:<relation> syntax :)
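Spelled out, the replication setup plus the <service>:<relation> syntax might look like this (the service names are illustrative, and the relation names are an assumption rather than necessarily the mysql charm's actual endpoint names):

```sh
juju deploy mysql db-master
juju deploy mysql db-slave
# one add-relation per pair -- relations are bi-directional; the
# :relation suffix picks the endpoint when a charm offers several
juju add-relation db-master:master db-slave:slave
```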
<ninjix> very cool. I'm now scaling the database service with total ease.
<SpamapS> ninjix: I think thats our next t-shirt ;)
<SpamapS> jcastro: ^^
<ninjix> really liking the low barrier learning curve too
<jcastro> man nice dude
<jcastro> ninjix, send me a mail, we should  send you a juju shirt, jorge@ubuntu.com
<ninjix> me like schwag
<shazzner> hello
<shazzner> quick question, what could be up when after deploying a charm the newly created machine still has a 'not-started' state
<shazzner> and I can't ssh into it yet
<shazzner> just need to be patient?
<shazzner> well I destroyed it, then recreated it
<shazzner> now it works, weird
<bkerensa> jcastro: when does this contest end? I have a busy week and probably cant do work till this weekend on the charm
<marcoceppi> bkerensa: the 23rd
<jcastro> bkerensa, it's ok we can be flexible
<jcastro> the deadline is the 23'rd
<jcastro> but then there's a week of reviews, etc.
<shazzner> cool
<jcastro> this is the first one so it'd be dumb to make it "pencils down everyone!"
<_mup_> juju/relation-id r489 committed by jim.baker@canonical.com
<_mup_> Completed relation id refactoring
<SpamapS> I think if you already submitted the charm..
<SpamapS> and you are just dealing with my nit-picks..
<SpamapS> your entry will be considered positively ;)
<SpamapS> bkerensa: ^^
<bkerensa> kk
#juju 2012-03-21
<_mup_> juju/relation-id r490 committed by jim.baker@canonical.com
<_mup_> Comments, PEP8, tests
<_mup_> juju/relation-id r491 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> juju/subordinate r523 committed by kapil.thangavelu@canonical.com
<_mup_> revert
<shazzner_> hello
<shazzner_> had a quick question: when I deploy my charm, it works but I always get an "install_error" for the state when I run juju status
<shazzner_> any ideas on what could cause that?
<shazzner_> or what it even means :p
<shazzner_> ok nevermind, it was something that really was an error :p
<_mup_> juju/relation-hook-context r492 committed by jim.baker@canonical.com
<_mup_> Initial commit
<roubi> hi
<AlanBell> roubi: if you have a question about juju ask it in the channel
<roubi> it's my first time
<roubi> on freenode
<roubi> so
<roubi> am 'i
<roubi> in the wrong way?
<roubi> is there any one who have tried juju on an ovh server
<AlanBell> roubi: I have not tried juju at all, but some others have, do you have a question you want answered?
<roubi> yes i've
<roubi> do you know some one who've tried it using an ovh server
<arashbm> Hi!
<arashbm> I have a question about charm development.
<arashbm> I'm developing a ruby on rails charm,
<arashbm> the problem is that not all units provide the same relations
<arashbm> but they all require the same relations with db and so on
<arashbm> should I split it into two charms, one for those having an http interface and one for the others (workers)?
<jamespage> arashbm, its possible todo that through a single charm with multiple personalities
<arashbm> Thanks jamespage. I looked trough the docs at juju.ubuntu.com/docs but I couldn't find any information about it.
<jamespage> arashbm, it's not really a juju feature per se
<jamespage> but you can write charms that behave in this way
 * jamespage looks for a good example
<jamespage> arashbm, what are you writing your hooks in?
<arashbm> you mean the scripting language?
<arashbm> I'm using bash for some
<arashbm> and ruby for the others
<jamespage> arashbm, OK
<jamespage> arashbm, does the http interface do anything without the workers?
<jamespage> i.e. would that be a valid deployment?
<arashbm> yes, there could be apps consisting only of web-server (simple apps not doing anything in background)
<arashbm> it could be a worker-only (like spiders and so on)
<arashbm> and it could be a mixture of both (some web and some workers)
<jamespage> arashbm, is this a generic ruby on rails charm or is it targeting a specific application?
<arashbm> It's generic
<bbcmicrocomputer> hmm... I seem to wait forever sometimes for a new charm to deploy on EC2
<bbcmicrocomputer> the EC2 instance itself is up, I can ssh into it
<bbcmicrocomputer> but the juju machine/unit agents seem to take a long time to deploy
<bbcmicrocomputer> any idea if this is known or being investigated?
<bbcmicrocomputer> hmm.. that's interesting, the machine agent process 'python -m juju.agents.machine' seems to be in a loop launching itself on this particular instance
<bbcmicrocomputer> keeps relaunching every second or so with a new pid
<Tribaal> hey folks, I'll be making a short presentation about juju to a local python user group tonight, any tips?
<jamespage> bbcmicrocomputer, is that upstart managed? it might have a respawn config
<bbcmicrocomputer> jamespage: yeah, I think so
<bbcmicrocomputer> jamespage: it appears the new instance has an old version of juju on it, only 0.5+bzr457-0ubuntu1 and the error of its relaunching is 'machine.py: error: unrecognized arguments: --nodaemon --session-file /var/run/juju/machine-agent.zksession'
<bbcmicrocomputer> jamespage: any idea how newer version of juju are supposed to propagate to instances?
<bbcmicrocomputer> jamespage: I assumed the instance was meant to use the same juju repository as what deployed the charm
<jamespage> bbcmicrocomputer, in your ~/.juju/environments.yaml file you can specify an origin:  ppa
<jamespage> I think by default it will use the version in Ubuntu rather than PPA
<jamespage> I think*
<bbcmicrocomputer> jamespage: hmm... how have I got away with not specifying ppa in my environment before then.. I'm sure the other instances have used ppa when I've deployed them from a ppa'ed client
<bbcmicrocomputer> jamespage: 'If this option is not set, juju will attempt to detect the correct origin based on its run location and the installed juju package.'
<bbcmicrocomputer> jamespage: it does seem to happen everytime a new juju ppa gets released
<bbcmicrocomputer> jamespage: so I guess it's something with my version being out of sync when that happens
<bbcmicrocomputer> jamespage: ok, thanks, I'll specify ppa in the environment
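The origin setting being discussed lives in ~/.juju/environments.yaml; a hedged sketch (other fields elided, and the `juju-origin` key name is from memory of juju of this era, not verified against these exact versions):

```yaml
environments:
  sample:
    type: ec2
    juju-origin: ppa   # install juju on new instances from the team PPA
```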
<arashbm> Is anyone  using local provider or providers other than ec2 for testing?
<jamespage> arashbm, I use the local provider for most of my dev/test work
<arashbm> I have problems setting up local provider on 12.04:
<arashbm> error: Failed to start network default
<arashbm> error: Cannot open network interface control socket: Operation not permitted
<arashbm> I tried sudo-ing bootstrap but no hope
<jamespage> arashbm, can you check a few things for me
<arashbm> sure!
<jamespage> arashbm, sorry - trying to do multiple conversations
<jamespage> as your normal user:
<jamespage> output from 'groups'
<jamespage> apt-cache policy juju
<jamespage> you should not sudo bootstrap - juju will call sudo as required.
<arashbm> $ groups
<arashbm> arashbm adm cdrom sudo dip plugdev lpadmin sambashare
<arashbm> $ apt-cache policy juju
<arashbm> juju:
<arashbm>   Installed: 0.5+bzr457-0ubuntu1
<arashbm>   Candidate: 0.5+bzr457-0ubuntu1
<arashbm>   Version table:
<arashbm>  *** 0.5+bzr457-0ubuntu1 0
<arashbm>         500 http://ir.archive.ubuntu.com/ubuntu/ precise/universe i386 Packages
<arashbm>         100 /var/lib/dpkg/status
<jamespage> couple of things to do there
<jamespage> 1) stick your user in the 'libvirtd' group and then do a 'newgrp libvirtd'
<jamespage> at the moment your user can't see the local virtual network OR create it
<jamespage> 2) Upgrade to the latest version of juju from the PPA
<jamespage> 'sudo add-apt-repository ppa:juju/pkgs'
<jamespage> arashbm, ^^
<jamespage> then give it another spin
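jamespage's two steps, collected into one hedged sequence (run as the normal user; a full logout/login also works in place of newgrp):

```sh
# 1) put the user in the libvirtd group so it can see/create the
#    local virtual network
sudo adduser "$USER" libvirtd
newgrp libvirtd

# 2) upgrade juju from the team PPA
sudo add-apt-repository ppa:juju/pkgs
sudo apt-get update && sudo apt-get install juju
```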
<arashbm> Thanks :)
<jamespage> negronjl, tomcat charm looks good
<jamespage> negronjl, a few bits don't work though - how do you want feedback?  can't help but feel a merge proposal would make this easier
<roubi_> hi every body
<roubi_> is there  here any one who have tried working with servers from ovh
<jcastro> m_3, how's your thing coming along?
<roubi_> Hi,  We are trying to install and test orchestra and juju on two physical machines,  we have a problem with juju status. Could you help please
<negronjl> jamespage:  thx for taking the time to check it out.  regarding feedback:  merge proposal works ... or email or message pigeons too :)
<m_3> jcastro: hw issues atm... haven't done any real work yet today :)
<jamespage> negronjl, MP please
<hazmat> hi roubi_ so is orchestra by itself able to launch a machine?
<negronjl> jamespage: let me know the specifics of the things that don't work or, if you have time, you can fix them as well :)
<roubi_> Hi Hazmat, orchestra is not able to launch a machine
<hazmat> roubi_, figuring that out, independently of juju is a good first step
<roubi_> orchestra seems to be installed correctly but can not control power on/off the client
<jamespage> hi roubi_: still no luck?
<roubi_> yes jamespage
<jamespage> roubi_, have you managed to establish whether the required integrations are supported by your hosting provider (ovh)?
<jamespage> roubi ^^
<roubi_> I don't know how to do it
<roubi_> I mean this verification
<roubi_> :/
<jamespage> roubi: I remember you have two servers - one running orchestra
<jamespage> roubi: have you registered the second server in the orchestra/cobbler install on the first one?
<roubi> yes it is
<jamespage> roubi: and if you reboot the second server does it network boot from orchestra and re-install itself?
<roubi> when I allocated this 2nd server, it had ubuntu server 11.10 as its OS
<jamespage> roubi: yes - but your setup needs to allow orchestra to perform an automated install on this server
<roubi_> jamespage :do you think that the 2nd server must be with out OS
<roubi_> ?
<jamespage> roubi_, whether it has an existing OS is irrelevant - it needs to be able to network boot and install from orchestra
<roubi_> Yes but Even when I tried this between two pc
<hazmat> adam_g, btw the openstack charms are looking really nice
<hazmat> i pulled them out yesterday to show off during a presentation
<adam_g> hazmat: oh cool :)
<hazmat> jamespage, i pulled out yours to show some nice bash charms as well
<jamespage> hazmat, sweet - hope they went down OK
<jamespage> its quite hard to decide when bash is not enough and you should switch to something like python
<hazmat> jamespage, they did.. although there was some skepticism over going from a DSL back to shell scripts; it was sort of a devops thunderdome deathmatch..  http://www.meetup.com/DevOpsDC/events/46606602/
<hazmat> the audience was pretty savvy and saw the overall benefits
<jamespage> hazmat, 'Four frameworks will enter, one framework leaves' - hope we won!
<jamespage> 'we' as in 'juju'
<hazmat> jamespage, not quite, but there's a always a round two.
<jamespage> lol
<zirpu> who won?
<hazmat> zirpu, probably chef. there were also some folks from the cfengine company, which was nice.. puppet had some usage but not as much love.
<robbiew> hazmat: m_3 might want your slides as he's been placed in a thunderdome for the DevOpsDays event in Austin :P
<hazmat> robbiew, i put mine together under the gun mostly from your lisa slides and some of SpamapS , ie. simple rehash, didn't really have time for anything new.. but i could definitely share some of my cage technique ;-)
<robbiew> "cage technique"...lol, nice
<m_3> hazmat robbiew: definitely wanna talk beforehand
<m_3> it'll be fun
<robbiew> m_3: I'll have shirts to win over folks ;)
<m_3> short intro though... so gotta pick a very few apropos slides
<m_3> robbiew: awesome
<robbiew> puppetlabs is sponsoring a damn happy hour :/
<m_3> yikes
<m_3> goes along with the puppet shirt tho
 * robbiew has the canonical credit card...just sayin
<m_3> :)
<robbiew> tbh, we'd probably do better by staying above the fray...maybe even showcase a charm with puppet (they're in main)
<robbiew> ...chef isn't in Ubuntu 12.04
<robbiew> at the request of OpsCode (for those reading this)
<m_3> right... planning on a charm pulling chef from gems though
<m_3> I'm thinking of framing it as we're staying _below_ the fray... underlying coordination/eventing framework... etc etc
<robbiew> m_3: well, "above" would support our devopsolution meme...but whatever :P
<m_3> robbiew: yeah, true
<mpl> so what's the best  way to communicate between python and Go nowadays? net/jsonrpc ?
<mpl> sorry, wrong chan
<_mup_> juju/relation-hook-context r493 committed by jim.baker@canonical.com
<_mup_> Fix failing tests
<jseutter> question: does juju talk to maas or cobbler?
<jaaap> could someone help me with my local charms, i have setup an ubuntu server with lxc and juju. I can deploy charms from the examples installed on the system, but not from my downloaded charms folder. I'm trying to # juju deploy --repository charms/ local:mysql
<jaaap> but this responds in
<jaaap> 2012-03-21 19:21:30,033 INFO Searching for charm
<jaaap> Invalid value for force_https: False
<jaaap> 2012-03-21 19:21:30,278 ERROR Invalid value for force_https: False
<marcoceppi> jaaap: this is due to a bug in older charms
<marcoceppi> *looks for bugs*
<jaaap> i just tried mysql, wordpress and munin all fails, loaded my charms with # charm getall ~/charms/oneiric
<jaaap> it's very weird, i just copied the munin charm from ~/charms/oneiric/munin into /usr/share/doc/juju/examples/oneiric and did # juju deploy --repository /usr/share/doc/juju/examples local:munin which works, but not with # juju deploy --repository charms/ local:munin
<marcoceppi> jaaap: right, juju checks all charms in the repository. If one has an error then juju will fail
<marcoceppi> it's been fixed in the latest ppa version
<jaaap> marcoceppi: i'm running Version: 0.5+bzr486-1juju3~oneiric1, did # sudo add-apt-repository ppa:juju/pkgs ; sudo apt-get update && sudo apt-get install juju charm-tools apt-cacher-ng zookeeper libvirt-bin lxc
<jaaap> i'm not sure if there is another way to get the latest juju
<marcoceppi> that's the latest, I believe. One second
<ninjix> SpamapS: doing some mysql-mmm installs today I got some ideas on creating a mysq-mmm charm
<_mup_> juju/relation-hook-context r494 committed by jim.baker@canonical.com
<_mup_> Flush any additional relation hook contexts that are read, then possibly written
<_mup_> juju/relation-id r492 committed by jim.baker@canonical.com
<_mup_> Remove unused function
<ninjix> SpamapS: thinking along the lines of having relation go  mysql-mmm-service << mysql-server
<_mup_> juju/relation-hook-context r495 committed by jim.baker@canonical.com
<_mup_> Merged upstream
<SpamapS> ninjix: sweet, I've heard good things about mmm
<krondor> anyone know what might be causing ERROR invalid value force_https: False when I juju deploy mysql?
<m_3> krondor: yes, check the config.yaml file of all the charms in your repo
<m_3> krondor: strict-typing just landed for this file... if it uses a string field like "False" you need to change it to boolean _and_ False (no quotes)
<krondor> m_3:  thanks found it, roundcube apparently.
<m_3> there're a host of them... I'd recommend only keeping the charms you need in your repo atm
<m_3> at least until we have this problem fixed
<krondor> m_3:  no worries, moved to just what I need and all is good for testing again.
<m_3> krondor: awesome
<jcastro> m_3, are the inprogress charms also mirrored on github?
<m_3> jcastro: nope
<m_3> only charms that have lp:charms aliases
<m_3> i.e., for oneiric
 * jcastro nods
<jcastro> is it hard to include the others?
<jcastro> just thinking out loud

<jcastro> if people want to github prior to submitting is what I was thinking
<m_3> jcastro: no, it wouldn't be hard... just was eliminating noise
<m_3> we'd have to figure out a good naming scheme
<m_3> I'll test out some stuff
<jcastro> ok
<m_3> jcastro: someone could fork one of the official repos into their own acct, then submit a pull request onto the main branches
<m_3> alternatively, they could start a charm on a repo in their acct
<m_3> then let us know it's ready for review
<m_3> it'd still be manual for now, but we could automate stuff if we get lots of interest in that channel of submission
 * jcastro nods
#juju 2012-03-22
<arashbm> Hi there!
<arashbm> I have a problem setting up local provider
<arashbm> my juju is updated with ppa and my user is a member of 'libvirtd' group
<arashbm> It appears that there is a problem in networking.
<arashbm> error: Failed to start network default
<arashbm> error: internal error Child process (/usr/sbin/dnsmasq -u libvirt-dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override) status unexpected: exit status 2
<arashbm> (it appears when I 'juju bootstrap')
<arashbm> I don't know why but the problem suddenly solved right now after stopping dnsmasq ('sudo service dnsmasq stop').
<hazmat>  arashbm sounds like you had multiple dns masqs running
<hazmat> odd though, its directly listening on the interface
<hazmat> but possibly the other one was binding to all interfaces
<arashbm> I think it's the default behaviour of stock dnsmasq (as stated in /etc/dnsmasq line 96)
<arashbm> no sry
<SpamapS> hazmat: something definitely broken with destroy-service
<SpamapS> hazmat: several times I've done it and gotten 'no node' and had to run it again
<SpamapS> hazmat: I'll debug and get you a proper bug report tomorrow
 * hazmat takes a look
<_mup_> juju/relation-hook-commands-spec r8 committed by jim.baker@canonical.com
<_mup_> Removed support for using relation name with -r and corresponding proposal simplification; other edits
<_mup_> juju/relation-info-command-spec r8 committed by jim.baker@canonical.com
<_mup_> Now relation-ids and corresponding simplification
<_mup_> juju/relation-reference-spec r10 committed by jim.baker@canonical.com
<_mup_> Updated with respect to review comments
<shazzner_> has anyone had experience with the ceph charm?
<shazzner_> having some issues :/
<arashbm> hi, I have a problem developing a charm,
<arashbm> The problem is that not all service-units offer http interface, how can I manage that so that when for example haproxy is related to my service it only relates to those units offering http?
<arashbm> Is there a good example of such behavior in any other charms so I can look in the source?
<jaaap> trying to deploy a few juju charms locally on LXC.. but the charms are kept in state : pending. I installed wordpress, and in the /var/lib/lxc/ja-local-wordpress-0/rootfs/var/log/juju/unit-wordpress-0-output.log it's written "/usr/bin/python: No module named juju.agents" not sure if this is a bug, or if i'm doing something wrong
<arashbm> jaap: check your process and network stats, It's doing something!
<arashbm> jaap: took me a night of downloads (of course with 14KB/s)
<arashbm> jaaap ^^
<jamespage> morning all
<jamespage> arashbm, still working on that generic rails charm?
<jamespage> I was thinking about that last night
<jamespage> arashbm, I think the best way to do this would be to fail the relation for service units that don't offer http interfaces
<jamespage> the relation will then be marked as error - so users know something is up
<jamespage> arashbm, re generic charms - the only one we really have ATM is the node-js.app charm
<arashbm> jamespage: yea! But I have problem sharing my code over Launchpad!
<arashbm> It's blocked in our country! :|
<arashbm> and I'm still trying to find a good example of services that offer an interface on only some units
<jamespage> arashbm, thats not helpful now is it :-(
<jamespage> arashbm, the closest I think you will get to it is to use the 'optional' field - https://juju.ubuntu.com/docs/charm.html#the-metadata-file
<jamespage> but its not enforced
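For reference, the 'optional' field jamespage points at lives in a charm's metadata.yaml; a hypothetical sketch for arashbm's case (relation and interface names invented), noting again that juju does not enforce it:

```yaml
provides:
  website:
    interface: http
    optional: true   # documents intent only; not enforced by juju
```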
<jamespage> arashbm, re the generic charm approach - I'm personally not sold on that
<jamespage> I like the idea of a template charm for stuff like that
<jamespage> but I think a charm should encapsulate a specific service
<jamespage> just my personal opinion :-)
<arashbm> jamespage, the problem is that a web server or a worker in rails are very similar, they use the same code, operate in the same environment, they use the same db and so on.
<jaaap> arashbm: how would you debug the lxc instance? the instances got no ip address but sshd seems to be running (not sure how to connect), does nothing on the network (tried tcpdump), juju writes the service is started "2012-03-22 09:44:56,511: juju.state.unit@INFO: Started service unit wordpress/0" but state from juju status is pending… hmm
<arashbm> jaaap: on `juju debug-log` it states that it's 'Creating master container...'
<arashbm> this step took me a long time with a spike in `perl` cpu usage
<jaaap> arashbm, weird though cause the system looks like it's doing nothing at all, cpu, mem and network wise
<zyga> hi
<zyga> can I use juju to quickly spawn a VM for development?
<arashbm> How could I set up a provider on a bare metal ubuntu server? I plan to use it for development only!
<roubi> good morning
<jaaap> it took me some while to figure out how LXC works, finally after understanding the lxc-console command i was able to access the LXC container of my wordpress deployment to see what was going on… nothing. I tried to run the wordpress/hooks/install script manually and it just took a couple of seconds to complete. I'm wondering where this gets stuck. I see dhclient tries to open /var/lib/dhcp3/dhclient.eth0.leases where the folder /var/lib/dhcp3 does not
<jaaap> exist… just creating this folder makes dhclient write its config file. Not sure if this is the bug? Anyway, the LXC container is still not running. How would i know what commands juju is trying to execute once the LXC is started?
<jaaap> correction, the LXC container is running but not the Juju service
<jaaap> would like to develop and submit my plugin for tomorrow, but struggling with juju bugs makes it a bit more complicated
<jdstrand> hello
<jdstrand> so I'm fairly proficient with juju/lxc now and have workarounds for the things that blocked me before. so now I moved on to openstack and juju
<jdstrand> I have openstack running in a vm and it works perfectly fine with another client on the libvirt network starting and stopping instances via euca2ools
<jdstrand> however, when I try to use juju, some weirdness happens. these are all VMs and not real credentials, so I am not redacting anything
<jdstrand> here is my environments.yaml:
<jaaap> i see juju fails to execute python inside the LXC container for my wordpress deployment
<jaaap> root@ja-local-wordpress-0:~# /usr/bin/python -m juju.agents.unit --nodaemon --logfile /var/log/juju/unit-wordpress-0.log --session-file /var/run/juju/unit-wordpress-0-agent.zksession >> /var/log/juju/unit-wordpress-0-output.log
<jaaap> /usr/bin/python: No module named juju.agents
<jdstrand> http://paste.ubuntu.com/895046/
<jdstrand> the juju started host doesn't respond to pings and nmap is filtered on the public address
<jdstrand> I copied over my openstack.id_rsa* keys to the openstack vm and tried them on the private ip. ssh responded, but I couldn't login
<jdstrand> $ ssh -i ./.ssh/precise_openstack.id_rsa root@10.0.0.2
<jdstrand> Permission denied (publickey)
<jdstrand> meh
<jdstrand> $ ssh -i ./.ssh/precise_openstack.id_rsa ubuntu@10.0.0.2
<jdstrand> Permission denied (publickey).
<jdstrand> $ nmap -PN -p 22 10.0.0.2|grep ssh
<jdstrand> 22/tcp open  ssh
<jdstrand> $ nmap -PN -p 22 192.168.122.225|grep ssh
<jdstrand> 22/tcp filtered ssh
<jdstrand> $ ssh-keygen -l -f ./.ssh/precise_openstack.id_rsa.pub
<jdstrand> 2048 a9:4c:a5:47:55:da:af:77:db:d3:19:84:d0:5e:fa:a3  jamie@sec-precise-amd64 (RSA)
<jdstrand> $ nova keypair-list|grep mykey2
<jdstrand> | mykey2 | a9:4c:a5:47:55:da:af:77:db:d3:19:84:d0:5e:fa:a3 |
<jdstrand> so there are two things:
<jdstrand> euca-describe-instances shows that when running an instance via euca-run-instances, it is in the 'default' security group and 'running mykey2'
<jdstrand> euca-describe-instances shows that when running an instance via juju, it is in the 'juju-openstack, juju-openstack-0' security group and 'running None'
<jdstrand> (note that './.ssh/precise_openstack.id_rsa.pub' is /home/jamie/.ssh/openstack.id_rsa.pub on sec-precise-amd64 (the machine *not* running openstack but that can run euca2ools fine)
<jdstrand> )
<jdstrand> euca-describe-groups has the following: http://paste.ubuntu.com/895048/
<jdstrand> are these bugs I should file? is my environments.yaml wrong?
<jdstrand> here is a euca-describe-instances output showing an instance started via euca2ools (i-0000000b) and one started via juju (i-00000007)
<jdstrand> http://paste.ubuntu.com/895052/
<jdstrand> both were started from sec-precise-amd64 and i-0000000b is fully functional/accessible but i-00000007 is not
<jdstrand> looking at https://juju.ubuntu.com/docs/provider-configuration-ec2.html, it seems 'ec2-key-name' is no longer available...
<jdstrand> I should perhaps mention that the openstack vm is running on up to date precise, as is the other vm (sec-precise-amd64)
<arosales> jcastro: doing some browsing at omg and getting some "not founds"
<arosales> for all banner links
<arosales> ie http://omgubuntu.co.uk/go/2/http://www.omgubuntu.co.uk/2012/03/have-you-taken-the-ubuntu-user-survey/
<jcastro> arosales, can you hard refresh and try again?
<marcoceppi> jcastro: it doesn't look like the re-writes for that plugin are working. We'll have to add them to nginx
<marcoceppi> err arosales ^
<arosales> marcoceppi: ah, ok
<arosales> marcoceppi: and jcastro: nice work on OMG too, btw :-)
<jcastro> robbiew, so our man brandon wants to charm up drupal with the same hotness as wordpress, but he's on a mac
<robbiew> juju: guess needs to install ubuntu...or use a VM
<robbiew> or wait until 12.10...no other real option unfortunately
<jcastro> he's published mac software before, is it viable to show him the other work that was done perhaps?
<robbiew> that could work
<jcastro> ok I'll work that
<m_3> jcastro: I think it's worth pushing a vagrant spin-up... it'll install virtualbox and get ubuntu server up and running pretty quickly
<m_3> then lxc from there locally
<jcastro> oh the thing you mentioned last night?
<robbiew> doesn't AWS give t1.micro away?
<jcastro> robbiew, that's exactly what we used with a shared byobu instance.
<jcastro> it was pretty cool
<robbiew> nice
<jcastro> maybe after his charm imbrandon can work on that and/or a mac port. He's old school coredev so he knows how ubuntu works.
<robbiew> just remember go-lang is coming ;)
<_mup_> juju/environment-settings r487 committed by kapil.thangavelu@canonical.com
<_mup_> work in progress
<m_3> jdstrand: hi, reading backchannel now... I'll try to reproduce
<jdstrand> m_3: thanks! :)
<m_3> jdstrand: ok, so I'm checking the precise juju cli against openstack
<m_3> jdstrand: meanwhile, here're a couple of things to check:
<m_3> try just 'juju ssh 0'
<m_3> key handling has changed lately, so I'm not sure... but I think juju injects the default key found in the cli's env
<m_3> ec2-key-name is no longer used
<_mup_> juju/trunk r490 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] point juju to the charm store url [r=jimbaker,fwereade]
<m_3> jdstrand: you might also try with a larger instance size... m1.tiny might not have the resources necessary (I know this is a prob on ec2 t1.micros, but don't know how this compares to tiny on nova)
<m_3> jdstrand: I'll let you know in a bit if my previously-working juju openstack setup is still working with precise
<jdstrand> m_3: trying, 'juju ssh 0', but with 'running None' and nmap telling me it is filtered, I am not optimistic
<m_3> right
<jdstrand> m_3: I in the console log that cloud-init completed
<jdstrand> s/I/I saw/
<jdstrand> juju ssh 0 just hangs
<m_3> the biggest problem we've seen with juju on openstack has been that you either need to proxy ssh or attach a "public" address to the bootstrap node before issuing subsequent juju commands
<jdstrand> m_3: what envvar do I need to set in the cli env for the ec2-key-name?
<jdstrand> m_3: adam_g alerted me to that. I have setup /etc/nova/nova.conf to have --auto_assign_floating_ip. as mentioned, this works fine with euca2ools and you can see in my euca-describe-instances that it does have the public address
<m_3> jdstrand: 'authorized-keys: <public key>' might work for that... the last working setup I had was using ec2-key-name (oneiric)
<m_3> jdstrand: authorized-keys do get injected by cloud-init in ec2... testing now on canonistack
<jdstrand> m_3: I can try that-- do you care if I tear down this instance?
<m_3> nope
<jdstrand> m_3: to be clear, that should have the full 'ssh-rsa ...<key> <user>@<host>' that I would find in the public key?
<m_3> jdstrand: yes... example https://pastebin.canonical.com/62855/
<jdstrand> m_3: can I omit 'authorized-keys-path: /home/jamie/.ssh/openstack.id_rsa' (note this does not use my default ssh key)
<m_3> usually have to watch lp output of import-ssh-id for CRs though
<m_3> yes, I'd pitch that
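Putting m_3's suggestions together, the relevant environments.yaml fragment would look roughly like this (all values are placeholders; as comes up again further down, define either authorized-keys or authorized-keys-path, never both):

```yaml
environments:
  openstack:
    # ... provider settings (type, endpoint, credentials) elided ...
    authorized-keys: ssh-rsa AAAAB3NzaC1yc2E...redacted jamie@sec-precise-amd64
```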
<jdstrand> m_3: this key is not in lp. should it be? this isn't canonistack, this is a private openstack (https://wiki.ubuntu.com/SecurityTeam/TestingOpenStack)
<m_3> jdstrand: no need, just common practice to `import-ssh-id <id> >> .juju/environments.yaml` and edit
<jdstrand> oh I see what you mean
<jdstrand> I thought you meant for the juju instance or something
<jdstrand> m_3: still have 'running None' in euca-describe-instances. here is updated yaml: http://paste.ubuntu.com/895293/ (again, these creds don't mean anything in the real world)
<m_3> jdstrand: ok, gimme a few to see what I can spin up
<jdstrand> m_3: fyi, euca-get-console-output shows the instance is installing things
<jdstrand> an ssh keypair was generated
<jdstrand> m_3: oh! this time the instance responded when I did 'ssh -i ./.ssh/openstack.id_rsa 192.168.122.225'
<jdstrand> m_3: however, could not login
<m_3> jdstrand: hmmm
<m_3> juju ssh 0?
<m_3> is it hanging or denying?
<jdstrand> $ juju ssh 0
<jdstrand> 2012-03-22 11:35:56,477 INFO Connecting to environment...
<jdstrand> 2012-03-22 11:35:59,915 ERROR Invalid SSH key
<m_3> nice
<jdstrand> I wonder if there is a race with the first time euca-authorize is used
<jdstrand> aiui, the security groups need to be in place before the instance is started
<m_3> each instance gets a new security group last I checked
<m_3> so they're adding new ones during startup
<jdstrand> destroy-environment does seem to clean them up
<m_3> and yeah, it's gotta be there first... then can be changed afterwards with 'juju expose'
<jdstrand> something certainly didn't work right earlier today with that though
<jdstrand> (even though euca-describe-groups showed the groups were there)
<m_3> it's worth trying it without authorized-keys or authorized-keys-path at all
<jdstrand> m_3: wouldn't that just add my default key? how would juju know to use the one I added to nova?
<m_3> it wouldn't use the one you added to nova... it would just inject your default
<m_3> don't think the nova one is necessary... that's injected by default so euca-tools can talk
<jdstrand> right, so it wouldn't match my keypair. I'm assuming that is significant
<jdstrand> hmmm, I am used to specifying -k <keypair> with euca-run-instances
<m_3> right
<jdstrand> I figured juju operated in a similar fashion
<m_3> juju injects its own... that _used_ to be what you told it to with `ec2-key-name`
<m_3> and it injects `authorized-keys`
 * m_3 looking to see if 'juju ssh' will take '-i'
<jdstrand> ok, then I guess the 'running None' bit makes sense
<m_3> so it seems like the problematic part is convincing the juju cli to use a particular key
<jdstrand> cause juju isn't doing the -k <keypair> equivalent to let nova know
<m_3> just guessing man... I need to look at the code to see what the latest behavior really is
<jdstrand> oh, now 'ssh -i ./.ssh/openstack.id_rsa ubuntu@192.168.122.225' worked
<jdstrand> juju ssh 0 does not
<m_3> right... we injected the public part of openstack.id_rsa, but aren't telling 'juju ssh' which keys to use
<m_3> does 'juju status' say permission denied?
<jdstrand> 2012-03-22 11:48:42,970 ERROR Invalid SSH key
 * jdstrand tries with authorized-keys-path
<jdstrand> $ juju status
<jdstrand> Environment config cannot define both authorized-keys and authorized-keys-path. Pick one!
<jdstrand> 2012-03-22 11:50:31,096 ERROR Environment config cannot define both authorized-keys and authorized-keys-path. Pick one!
<m_3> jdstrand: did you try with neither option defined?
<m_3> you'd have to bounce the environment, but it should inject your default key... and then _use_ that key to talk to the instance
<jdstrand> m_3: I will try that in a moment
<jdstrand> I think before I specified the private key to authorized-keys-path
<m_3> oh gotcha
<jdstrand> m_3: /win 30
<jdstrand> meh
<jdstrand> m_3:
<jdstrand> argh
<jdstrand> m_3: so, it seems that only after cloud-init is done does the image become unreachable
<jdstrand> I will try m1.small now
<m_3> jdstrand: ok
<marcoceppi> Can I set the value of a config in a charm?
<jdstrand> m_3: I'm going to file several bugs surrounding the ssh stuff in the meantime
<marcoceppi> Sorry, that was a fractured question. I want to set the value of a config key in a hook, does config-set exist and is it accessible to hooks?
<m_3> marcoceppi: from a hook... not at the moment... you have to use 'juju set' from the cli
<marcoceppi> darn, k
<marcoceppi> I'm trying to capture a one-time routine, in this case flushing cache on all the units a charm runs on, so my idea was juju set flush-cache=true and have the hook reset it to false when it finished execution. Any ideas on a best practice for this?
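Since hooks can't write config back, one way to sketch marcoceppi's one-shot idea is to track the last-seen flag in a local marker file inside the config-changed hook. This is a hypothetical sketch (paths and option name invented); the config-get call is stubbed with an environment variable so the snippet runs standalone:

```shell
#!/bin/sh
set -e
# In a real hook you would run:  flag=$(config-get flush-cache)
# config-get prints booleans as the strings "True"/"False".
flag="${FLUSH_CACHE:-False}"        # stub standing in for config-get

STATE=./last-flush                  # marker: already flushed for this "True"
CACHE=./cache                       # hypothetical cache directory
mkdir -p "$CACHE"

if [ "$flag" = "True" ] && [ ! -e "$STATE" ]; then
    rm -rf "$CACHE" && mkdir -p "$CACHE"   # the actual flush
    touch "$STATE"                         # don't flush again until re-armed
elif [ "$flag" = "False" ]; then
    rm -f "$STATE"                  # re-armed: the next 'juju set flush-cache=true' flushes
fi
```

With m_3's simplification below, you could drop the marker file entirely and simply flush on every config-changed run.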
<jcastro> hazmat, look in your g+ window please
<jdstrand> m_3: *sigh* my problem is that the bootstrap node installs libvirt-bin, which starts the 192.168.122.0/24 default network, which happens to be the network that the instance's public ip address is in
<m_3> yikes
<jdstrand> m_3: why does this node need libvirt-bin?
<jdstrand> s/node/instance/
<m_3> jdstrand: there've been _lots_ of problems with lxc doing the same sort of thing
<m_3> jdstrand: that's a bug
<m_3> jdstrand: it's a dep of the juju package because of LXC
<jdstrand> m_3: ok, I'll file a bug on it
<m_3> I thought that was filed and fixed in the ppa... looking now
<jdstrand> m_3: one way to solve this would be to break up the juju package to have juju-lxc, which is just a metapackage
<SpamapS> its not Depends
<SpamapS> its Recommends
<jdstrand> then juju can Suggests it and the tools can tell you to install juju-lxc if it is missing
<SpamapS> the problem is juju installs itself without doing --no-install-recommends
<m_3> ah
<jdstrand> ok, that would work too
<SpamapS> we don't actually need the 'juju' package for agents though
<SpamapS> I think we should probably split the package into juju-core and juju .. and only install juju-core on the boxes
<SpamapS> and juju would just be a metapackage
<jdstrand> SpamapS: filing a bug now
<m_3> marcoceppi: as a cache-flush, it's probably safe to do it whenever 'juju set' is called... you might not need anything complicated there
<marcoceppi> m_3: good point, especially since there are only two other config settings and they're only for s3 access keys
<SpamapS> hazmat: ^^ splitting juju into two packages might be simpler than trying to refactor the cloudinit stuff to use --no-install-recommends
<SpamapS> hey can we get the bug bot in here to also watch the bugs for the ubuntu juju package?
<jdstrand> SpamapS, hazmat: fyi, bug #962389
<_mup_> Bug #962389: juju Recommends on lxc installs libvirt-bin which causes problems when testing in virtualized environments <juju (Ubuntu):New> < https://launchpad.net/bugs/962389 >
<SpamapS> jdstrand: thanks... I may end up marking it as a duplicate of the other one we have open, but for now its good to have both
<SpamapS> jdstrand: btw I'm almost done refactoring aws-status into indicator-aws.. :)
<jdstrand> SpamapS: thanks, any idea when this might be fixed? I need to figure out some way to work around this to finish my MIR
 * jdstrand really doesn't want to have to regenerate his openstack vm
<SpamapS> jdstrand: well the simplest thing would be to drop the Recommends
<SpamapS> jdstrand: since juju bugs you about the package being missing.. perhaps thats the best short term solution
<jdstrand> if juju bugs you about it, then that does make sense (assuming it only bugs you when trying to use 'local:')
<SpamapS> in fact.. I think I will do that.. not everybody uses local
<SpamapS> jdstrand: yeah, it handles the missing package gracefully enough that I think we can just make it a suggests
<SpamapS> jdstrand: you may as well suspend your MIR anyway, I expect a FFE + Upload today
<jdstrand> awesome. that would work great for me :)
<SpamapS> jdstrand: I'll fix that bug in that upload as well.
<jdstrand> well, I have been working on it for days :)
<SpamapS> jdstrand: new provider is landing.. maas
<jdstrand> SpamapS: would you mind pinging me when you upload?
<SpamapS> jdstrand: I will ping you straight away yes
<jdstrand> thanks
<m_3> sad but true: charmtests.markmims.com... fixing the remaining charms with config.yaml typing issues now
<SpamapS> m_3: What a long strange trip it's been
<m_3> SpamapS: yup
<marcoceppi> A boolean config value, does it get translated to 0,1 or stay False,True?
<m_3> marcoceppi: dude... don't even ask
<SpamapS> hahahaha
<marcoceppi> <3 :D
<SpamapS> should be documented anyway
<SpamapS> I believe it will be 'False' or 'True'
 * marcoceppi makes a quick test
<SpamapS> since its just printed with  str(value)
<SpamapS> unless you ask for json
<m_3> yeah, it'll look like a string in the hook
<SpamapS> then it will be json pure
<m_3> which makes one wonder the value of strong typing
<marcoceppi> my bad, I really just wanted ENUM() for config options, to keep people from doing stupid things, got booleans instead
<SpamapS> m_3: as long as we use python it won't matter.. and Go will have to make sure it continues to print the values the same at least in the first release so we can transition.
<SpamapS> marcoceppi: yes, enum would be fantastic
<m_3> nice... charms that mix boolean:FALSE and int:1 in the same config
<m_3> (the int:1's used as a bool of course)
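The practical upshot for hook authors, per SpamapS's str(value) point above: compare against juju's string rendering of the boolean. A minimal sketch, with the option value stubbed in so it runs standalone (a real hook would read it with config-get):

```shell
#!/bin/sh
set -e
# config-get renders a boolean option with str(value), so hooks see the
# literal strings "True"/"False"; with --format json it would be true/false.
val="${FORCE_HTTPS:-True}"          # stub standing in for: $(config-get force_https)

if [ "$val" = "True" ]; then
    scheme=https
else
    scheme=http
fi
echo "site scheme: $scheme"
```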
<SpamapS> m_3: I hope that at some point we can go back and clean up the old crusty charms that existed before we knew what a good charm was ;)
<m_3> SpamapS: me too... we can make a fix-it-friday or something
<m_3> :)
<m_3> actually I'd like to get 'deprecated' parts into all the interfaces while still on oneiric
<m_3> then we can actually deprecate them in the bump to precise
<jimbaker> niemeyer, hazmat what did i do wrong in the code review of https://codereview.appspot.com/5836049/ ? i see the new version here https://codereview.appspot.com/5836049/patch/8001/9002, but it's not linked to the review comments https://codereview.appspot.com/5836049/diff/5001/source/drafts/relation-reference.rst
<hazmat> jimbaker, each submit generates a new diff that is empty of comments, previous comments are linked to diff revisions on the issue.
<_mup_> juju/trunk r491 committed by kapil.thangavelu@canonical.com
<_mup_> remove ability to spec charm store url via env variable per request.
<jimbaker> hazmat, so what should the flow be?
<marcoceppi> Should upgrade-charm put new configuration options in place on the charm? Or does it need a destroy/deploy?
<jdstrand> SpamapS: I neglected to say anything about indicator-aws
<jdstrand> SpamapS: cool! :)
<hazmat> jimbaker, not sure what your asking, the flow is typically you respond to previous review points via comments, work on the changes, resubmit and get a fresh review
<SpamapS> marcoceppi: config-get *should* see them
<jimbaker> hazmat, so the flow is lbox propose ... address comments ... lbox submit ... more work ...  lbox submit ...
<SpamapS> marcoceppi: and config-changed should run when they're changed.
<hazmat> jimbaker, yup
<marcoceppi> SpamapS: juju doesn't let me set against the units
<marcoceppi> let me try another upgrade
<jimbaker> hazmat, backing up... sounds like there's a fresh review in place then at https://codereview.appspot.com/5836049/patch/8001/9002
<jimbaker> so should be all set
<SpamapS> marcoceppi: juju set omg-wp clear_cache=something isn't working?
<niemeyer> SpamapS: juju is consistently getting stuck on juju status in 486
<niemeyer> SpamapS: Are there (or were) any known issues around htat
<niemeyer> ?
<marcoceppi> SpamapS: yeah, but I think upgrade-charm failed
<m_3> ok, much better... charmtests.markmims.com
<SpamapS> niemeyer: nothing I'm aware of
<SpamapS> marcoceppi: the hook failed?
<marcoceppi> \o/ minecraft is fixed
<marcoceppi> SpamapS: it looks like the upgrade-charm hook failed
<marcoceppi> So, nevermind
<eb015> I have a problem with bootstrap in juju: error: Environments configuration error: /home/localadmin/.juju/environments.yaml: environments.orchestra.acquired-mgmt-class: required value not found
<SpamapS> eb015: are you trying to use orchestra?
<_mup_> juju/trunk r492 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] update store unit test to point to temporary elastic ip
<eb015> yes
<eb015> My environments.yaml is juju: environments
<eb015> What does 'update store unit test to point to temporary elastic ip' mean?
<imbrandon> marcoceppi: at some point in the next ~12 hours i'll do that op code cache clear, it's very simple, just need to get my bearings waking up and such
<eb015> I'm trying to use orchestra...
<marcoceppi> imbrandon: I've already got something in place, it's not elegant though
<imbrandon> marcoceppi: rockin kk
<_mup_> juju/trunk r493 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] full test suite update for cs url, update all references to use a constant in repository mod
#juju 2012-03-23
<_mup_> juju/trunk r494 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] remove setuptools dependencies, switch charm store back to domain, sans default-series use precise
<_mup_> juju/relation-hook-context r496 committed by jim.baker@canonical.com
<_mup_> Make relation id lookup robust
<yolanda> hi, good afternoon, i'm trying to deploy a service in a local instance, and i have an error, can you help me
<yolanda> ?
<jamespage> yolanda, sure - what are you seeing?
<yolanda> i see that error every time
<yolanda> LXCError: debootstrap is /usr/sbin/debootstrap
<yolanda> mkdir: cannot create directory `/var/lock': File exists
<yolanda> failed to execute template 'ubuntu'
<yolanda> and then when i try to add a relation i see that error:
<yolanda> No matching endpoints
<yolanda> 2012-03-23 13:28:32,003 ERROR No matching endpoints
<yolanda> jamespage ^
<jamespage> yolanda, looks like to different errors
<jamespage> what does juju status show you ATM?
<jamespage> to/two
<yolanda> machines:
<yolanda>   0:
<yolanda>     dns-name: localhost
<yolanda>     instance-id: local
<yolanda>     instance-state: running
<yolanda>     state: running
<yolanda> services:
<yolanda>   openerp:
<yolanda>     charm: local:oneiric/openerp-1
<yolanda>     relations: {}
<yolanda>     units:
<yolanda>       openerp/0:
<yolanda>         machine: 0
<yolanda>         public-address: null
<yolanda>         relations: {}
<yolanda>         state: pending
<yolanda>   postgresql:
<yolanda>     charm: local:oneiric/postgresql-15
<yolanda>     relations: {}
<yolanda>     units:
<yolanda>       postgresql/0:
<yolanda>         machine: 0
<yolanda>         public-address: null
<yolanda>         relations: {}
<yolanda>         state: pending
<yolanda> 2012-03-23 13:36:14,209 INFO 'status' command finished successfully
<yolanda> sorry, i got disconnected, jamespage
<jamespage> yolanda, I realised (so kept quiet) :-)
<yolanda> i pasted you the status of my machines, could you read it?
<jamespage> yolanda, yep
<yolanda> ok
 * jamespage looks
<yolanda> let me pastebin the full error
<jamespage> yolanda, is that the official postgresql charm?
<yolanda> jamespage, yes, i grabbed with charm getall
<yolanda> the openerp is a new one i'm building
<jamespage> yolanda, and can you pastebin the metadata.yaml file for openerp as well please
<yolanda> that is the error: https://pastebin.canonical.com/62911/
<yolanda> and that the metadata from openerp: https://pastebin.canonical.com/62912/
<jamespage> yolanda, so the reason for the No matching endpoints message is that the interfaces have different names
<jamespage> openerp - interface: postgresql
<jamespage> postgresql - interface: pgsql
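The fix is to make both sides of the relation name the same interface; a sketch of the corrected requires stanza in the openerp charm's metadata.yaml (relation name hypothetical), matching what the postgresql charm provides:

```yaml
requires:
  db:
    interface: pgsql   # must match the interface the postgresql charm provides
```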
<yolanda> oh
<yolanda> first charm, so a newbie error :)
<yolanda> and the /var/lock problem?
<jamespage> yolanda, looking now - which version of ubuntu are you developing on?
<yolanda> oneiric
<yolanda> i used a local instance because openstack was giving error
<yolanda> i would have liked to use precise, but i can't at the moment
<yolanda> also, i changed dependency to pgsql but still have the endpoints error
<yolanda> jamespage ?
<jamespage> yolanda, have you upgraded the charm in your local test deployment?
<yolanda> jamespage, i just did a getall
<yolanda> do i need to do something else?
<yolanda> mm, i destroyed environment and created one new, sorry, didn't understand the question
<jamespage> yolanda, yes but if you change metadata.yaml for openerp you need to upgrade the charm that juju is using
<yolanda> so just destroyed and created new one, so data is fresh
<jamespage> juju upgrade-charm --repository . local:openerp
<jamespage> OK - so that should be OK
<yolanda> i'm trying again
<jamespage> yolanda, have you seen this charm thats being worked on - http://charms.kapilt.com/~patrick-hetu/oneiric/openerp-server
<yolanda> jamespage, no , didn't see it
<jamespage> quite new still - might be worth collaborating
<yolanda> jamespage, the thing is that we use a package that we are building inside canonical
<yolanda> i should ask for permission in the team
<jamespage> yolanda, fair enough - that charm uses packages from a PPA for openerp
<jamespage> its version 6 - is that up-to-date
<yolanda> 6.1
 * jamespage knows nothing about OpenERP
<yolanda> it's our version
<yolanda> mm, it's different
<yolanda> 6.1 works very different than 6.0
<yolanda> but i see the relation between openerp and postgres, with db and db-admin, i'll try like that
<jamespage> yolanda, sorry - X died on me
<yolanda> i was saying that 6.1 it's very different from 6.0
<yolanda> no need for server and web, it comes together
<yolanda> i'm checking his metadata however, because that is the same
<yolanda> jamespage, the requires is the same, but i have this endpoint error, can something be wrong in my environment?
<jamespage> yolanda, thinking
<jamespage> might be worth checking the metadata.yaml in the postgresql charm you have locally
<jamespage> it should provide db:
<jamespage> interface: pgsql
<yolanda> yes, i checked
<jamespage> ack
<yolanda> it is
<yolanda> name: postgresql
<yolanda> summary: "object-relational SQL database (supported version)"
<yolanda> description: |
<yolanda>   PostgreSQL is a fully featured object-relational database management
<yolanda>   system.  It supports a large part of the SQL standard and is designed
<yolanda>   to be extensible by users in many aspects.  Some of the features are:
<yolanda>   ACID transactions, foreign keys, views, sequences, subqueries,
<yolanda>   triggers, user-defined types and functions, outer joins, multiversion
<yolanda>   concurrency control.  Graphical user interfaces and bindings for many
<yolanda>   programming languages are available as well.
<yolanda> provides:
<yolanda>   db:
<yolanda>     interface: pgsql
<yolanda>   db-admin:
<yolanda>     interface: pgsql
<jamespage> yolanda, I need to just finish off some testing - can you put you openerp charm somewhere I can see it and I'll try to reproduce
<yolanda> ok
<yolanda> jamespage, do you have access here? https://code.launchpad.net/~yolanda.robla/charms/oneiric/openerp/trunk
<jamespage> yolanda, yep - can see that
<yolanda> i wanted to target precise platform, and the openerp package in the repo is for precise, can it be causing a problem? i don't have any precise machine to test
<jcastro> m_3, SpamapS, marcoceppi: let's have a call late in the afternoon before I close the contest.
<marcoceppi> jcastro: sounds good, it's going to be an easy day for me at work
<marcoceppi> jcastro: I'll try to find a decent microphone
<jcastro> I think Dr. Mims is travelling today
<jcastro> so it'll just be us three, I figure close the contest around EOD and then we have all next week to review
<jaaap> wondering how i commit my first juju charm… followed the instructions and failed to run bzr push lp:~4-ja-d/charms/oneiric/appflower/trunk and get bzr: ERROR: Invalid url supplied to transport: "lp:~4-ja-d/charms/oneiric/appflower/trunk": No such source package appflower.
<jaaap> first time using bzr and launchpad
<eb914> I'm trying to install juju with orchestra but it doesn't work...
<eb914> error: Environments configuration error: /home/localadmin/.juju/environments.yaml: environments.orchestra.default-series: required value not found
* Topic unset by jcastro on #juju
* jcastro changed the topic of #juju to: http://juju.ubuntu.com || Charms at http://jujucharms.com || Want to write a charm? http://juju.ubuntu.com/Charms
<jaaap> Got it.. committed my first charm. Thanks for the great work of making juju.. this will bring a bright future, i'm sure
<yolanda> jamespage, did you see something strange in my charm?
<SpamapS> eb914: that means what it says, you are missing default-series
<SpamapS> eb914: are you following any specific documentation?
* jcastro changed the topic of #juju to: Charm contest ends monday! (extra weekend of hacking!) http://juju.ubuntu.com || Charms at http://jujucharms.com || Want to write a charm? http://juju.ubuntu.com/Charms
<marrusl> jcastro++  o/
<jcastro> SpamapS, hey, so marrusl and folks want an on the fly charm school. :)
<jcastro> marrusl, he's usually dropping off his kid at school, but he should be around
<jcastro> SpamapS, and he's left coast so I don't think 1:30 our time will be a problem
<jcastro> ^^ that message meant for marrusl
<marrusl> jcastro, hey.  makes sense.  good.  woo hoo, we got the A team!
<jcastro> marrusl, I'll be at a doctor's appointment though, but I am confident Clint can hook this up.
<_mup_> juju/relation-hook-context r497 committed by jim.baker@canonical.com
<_mup_> Verify visibility of relation ids from a given hook context
<jamespage> yolanda: think I know what the problem is - postgresql has two interfaces of type pgsql
<jamespage> db and db-admin
<jamespage> you prob need to be explicit
<jamespage> juju add-relation openerp:db postgresql:db
<jamespage> just testing now
<jamespage> (sorry by other testing took a bit longer than expected)
<yolanda> jamespage, np
<yolanda> btw, i'm going out for a while no
<yolanda> now
<jamespage> yolanda, OK - so when you get back I think that is the issue - just tested locally
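[The ambiguity jamespage diagnoses above can be sketched like this: the postgresql metadata quoted earlier provides two endpoints on the same pgsql interface, so a bare "juju add-relation openerp postgresql" has no unique match. The dict literal below mirrors that metadata.yaml; the matching logic is a simplification for illustration, not juju's actual resolver.]

```python
# Two provided endpoints share the "pgsql" interface, so an implicit
# "juju add-relation openerp postgresql" cannot pick one automatically.
provides = {
    "db": {"interface": "pgsql"},
    "db-admin": {"interface": "pgsql"},
}

def endpoints_for(interface):
    """Names of provided endpoints exposing the given interface."""
    return sorted(name for name, spec in provides.items()
                  if spec["interface"] == interface)

matches = endpoints_for("pgsql")
print(matches)  # ['db', 'db-admin'] -> ambiguous, must name the endpoint
```

[Naming the endpoint on both sides, as suggested above, removes the ambiguity: juju add-relation openerp:db postgresql:db]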
<arosales> jcastro: m_3: you guys available for a quick chat with jamespage and I?
<arosales> jcastro: m_3: jamespage: I'll reschedule for Monday
<antono> hi all
<antono> anyone have huge desktop freezes when juju starts local linux containers?
<antono> i cannot even move mouse
<be> I'm trying juju, my error is: ERROR Cannot connect to machine MTMzMjUwNzQ3Mi42NTUzNzcyNi41MzQwMjI (perhaps still initializing): Invalid host for SSH forwarding: ssh: Could not resolve hostname oneiric01.ubuntu.lan: Name or service not known
<SpamapS> antono: how much RAM do you have?
<SpamapS> be: you're using orchestra then?
<be> yes
<SpamapS> be: you have to be able to resolve the hostnames of the machines from your client
<antono> SpamapS: 8GB
<SpamapS> antono: I've started 4 containers on a machine w/ 4GB of RAM, so thats not typical
<antono> SpamapS: also i have SSD, can it affect lxc behavior?
<SpamapS> antono: mine is also an SSD actually.. a 128G macbook air
<be> what's the system to resolve it?
<SpamapS> be: DNS
<SpamapS> be: it should work to put that hostname in your /etc/hosts
<be> Thank you I resolved it but now the error is ERROR Cannot connect to machine MTMzMjUwNzQ3Mi42NTUzNzcyNi41MzQwMjI (perhaps still initializing): Invalid SSH key
<SpamapS> be: Can you explain to me what documentation you are following ? I have found that some of them miss the finer points
<jcastro> SpamapS, the moodle charm needs an early review, he's having a hard time with the db-relation and is kind of stuck, so I just tagged it.
<SpamapS> jcastro: mmk, will look at it first
<jcastro> <3
<be> https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<SpamapS> be: ok that one looks like the one I fixed recently
<be> ?
<SpamapS> be: so its possible that your installation did not go well, because your SSH key did not end up on the installed machine.
<be> But I can connect with the client using ssh directly
<SpamapS> be: thats odd
<SpamapS> be: on the machine, are there juju agents running? (ps auxw | grep juju)
<SpamapS> be: also, what version of juju do you have installed? (dpkg -l juju)
<be> ps auxw | grep juju                                          ubuntu   15043  0.0  0.1   9252   908 pts/0    S+   18:08   0:00 grep --color=auto juju
<SpamapS> be: ok, so most likely something failed when installing or starting up juju
<SpamapS> be: so where are you at now?
<SpamapS> be: stuck with an ssh key error?
<be> juju status 2012-03-23 18:27:19,723 INFO Connecting to environment. 2012-03-23 18:27:20,233 ERROR Connection refused Unhandled error in Deferred: Unhandled Error Traceback (most recent call last): Failure: txzookeeper.client.ConnectionTimeoutException: could not connect before timeout Cannot connect to machine MTMzMjUwNzQ3Mi42NTUzNzcyNi41MzQwMjI (perhaps still initializing): could not connect before timeout after 2 retries 2012-03-23 18:27:4
<SpamapS> be: something broke during the cloud-init phase I think. Can you pastebin /var/log/cloud-init-output.log (it might have sensitive information in it, so please make sure it doesn't)
<SpamapS> be: there's a useful tool, btw, 'pastebinit', which you can use to just do 'pastebinit < /var/log/cloud-init-output.log'
<hazmat> SpamapS, i fixed a problem last night that was causing breakage.. the setuptools stuff from rev 486 was broken. latest fixed build is 494
<hazmat> er. good trunk rev
<hazmat> i've gone ahead and yanked the setuptools stuff, it was to support osx, but it caused issues for deploying branches
<hazmat> ppas/packages should have been okay though
<be> sudo pastebinit < /var/log/cloud-init-output.log -bash: /var/log/cloud-init-output.log: No such file or directory
<SpamapS> hazmat: right, waiting for the PPA build to succeed and then I will do another upload.
<_mup_> Bug #963304 was filed: If bootstrap node is stopped or rebooted in EC2, its address changes and all agents lose track of it <ec2> <juju:New> < https://launchpad.net/bugs/963304 >
<_mup_> juju/ftests r14 committed by gustavo@niemeyer.net
<_mup_> Fix after new environments.yaml option names.
<_mup_> juju/ftests r15 committed by gustavo@niemeyer.net
<_mup_> Tentative test timeout logic.
<_mup_> juju/ftests r16 committed by gustavo@niemeyer.net
<_mup_> Only move .ssh out of the way if not previously done by
<_mup_> an interrupted test run.
<_mup_> Also do replace ssh-keygen's input by /dev/null, so that
<_mup_> it doesn't ask for input ever.
<_mup_> Bug #963355 was filed: Agent lifecycle suport for subordinates <juju:In Progress by bcsaller> < https://launchpad.net/bugs/963355 >
<_mup_> juju/relation-hook-context r498 committed by jim.baker@canonical.com
<_mup_> Testing around child relation hook context flushes with parent, and crresponding bug fixes
<bkerensa> jcastro: can I pm?
<bkerensa> SpamapS: When you suggested "start-stop-daemon --start --oknodo --background --pidfile /run/subway.pid --startas /usr/bin/node -- subway"
<bkerensa> which hook did you propose this go into?
<bkerensa> the install or just in start?
<bkerensa> or if I could do this upstart how would I do that?
<bkerensa> or could someone point me to a charm that creates a upstart job so I can gank it
<SpamapS> bkerensa: in the start hook.
<SpamapS> bkerensa: I thought I recommended how to make an upstart job too
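[For reference, the upstart job bkerensa asks about might look like the fragment below; the job name, paths, and node entry point are assumptions for illustration, not taken from the actual subway charm. The install hook would write a file like this to /etc/init/, and the start hook would then just run "start subway" (or "service subway start").]

```
# /etc/init/subway.conf -- hypothetical upstart job written by the
# install hook; paths and entry point are illustrative only
description "Subway IRC client"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/bin/node /opt/subway/index.js
```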
<_mup_> juju/relation-hook-context r499 committed by jim.baker@canonical.com
<_mup_> Verify child relation hook contexts are flushed
<bkerensa> SpamapS: I see it now and have Bug #944246 set back for review again
<_mup_> Bug #944246: Charm Needed: Subway IRC client/server <new-charm> <Juju Charms Collection:Fix Committed by bkerensa> < https://launchpad.net/bugs/944246 >
<SpamapS> bkerensa: awesome, just getting ready to start another round of reviews
<bkerensa> :D
<SpamapS> bkerensa: subway looking good, just running a few tests. :)
<_mup_> Bug #963560 was filed: Possible intermittent failure in juju.agents.tests.test_provision.FirewallManagerTest.test_watch_service_changes_is_called  <juju:New> < https://launchpad.net/bugs/963560 >
<_mup_> juju/relation-ids-command r499 committed by jim.baker@canonical.com
<_mup_> Initial commit
<_mup_> juju/relation-ids-command r500 committed by jim.baker@canonical.com
<_mup_> Merged upstream
<SubwayawbuS> :-D
<SubwayawbuS> SpamapS: you are the coolest person I've every known
<SpamapS> SubwayawbuS: begone, peon!
<SpamapS> ring the bell! Subway promulgated
#juju 2012-03-24
<SpamapS> bkerensa: ^^ Well done! :)
<bkerensa> lol
<bkerensa> SpamapS: Does that mean it passes muster?
<SpamapS> bkerensa: yes, it also means it will be regularly tested by our automated testing system
<SpamapS> bkerensa: and when the built in backend charm store is out, it will be available as 'juju deploy subway' (without a local copy)
<bkerensa> SpamapS: Good stuff well I hope to make improvements on it in the future including upstart job handling and eventually a upgrade hook (if the author continues to add features)
<SpamapS> bkerensa: fantastic
<SpamapS> bkerensa: to make sure you have the latest, do 'bzr pull --remember lp:charms/subway'
<bkerensa> SpamapS: :P I dont get to be maintainer of my charm? :P
<SpamapS> bkerensa: after you push your changes into your own branch (lp:~bkerensa/charms/oneiric/subway/trunk) you can propose them for merging into the official charm with 'bzr lp-propose'
<SpamapS> bkerensa: you can be the maintainer if you want. You just have to become a member of charmers... which is really easy.. you just ask, and 2 of us +1 you ;)
<SpamapS> bkerensa: the trade off is, you are expected to also do charm reviews and bug triage. :)
<bkerensa> uhh maybe after 12.04 release :P I am far too busy atm :)
<SpamapS> bkerensa: we do it this way so that we can be sure that the charms are always up to date.
<SpamapS> bkerensa: also your version of the charm will be available in the store at cs:~bkerensa/subway
<bkerensa> nice
<bkerensa> is this store web-based or does it ship in a gui or something?
<SpamapS> bkerensa: we just want to keep the default set extremely high quality.. which means you have to seek our sponsorship for changes, or commit to being a full "charmer"
<SpamapS> bkerensa: the store is just a backend service right now. Its undergoing live deployment right now.
<SpamapS> bkerensa: notice if you type 'juju deploy subway' right now you get a DNS error ...
<bkerensa> SpamapS: ahh yeah
<SpamapS> bkerensa: there's also a readonly charm browser here http://charms.kapilt.com/~bkerensa/oneiric/subway
<bkerensa> SpamapS: pretty neat browser
<SpamapS> hazmat: ^^ another fan of your work. :)
<hazmat> SpamapS, its also available now at jujucharms.com
<hazmat> SpamapS, just put that together today after a request jcastro
<SpamapS> sweeeeeet
<SpamapS> hrm..
<SpamapS> there seems to be a bug in "broken" hooks
<hazmat> SpamapS, ?
<SpamapS> http://paste.ubuntu.com/897196/
<SpamapS> thats while inside db-relation-broken
<SpamapS> happens not in debug-hooks as well
<hazmat> SpamapS, what version?
<hazmat> we put a broken hook context into place
<SpamapS> ii  juju                    0.5+bzr494-1juju4~oneir next generation service orchestration system
<hazmat> ugh
<hazmat> oh.. that is the departed hook context
<SpamapS> hazmat: breaks the mysql charm in an interesting way :-/
<hazmat> SpamapS, so what would you expect it to tell you
<hazmat> SpamapS, the relation has been broken, its not coming back
<hazmat> there are effectively no members of the other side
<hazmat> so relation-list returns what?
<SpamapS> hazmat: I was using it to hack around bug #791042
<_mup_> Bug #791042: *-relation-broken has no way to identify which remote service is being broken <juju:Confirmed> < https://launchpad.net/bugs/791042 >
<hazmat> SpamapS, ah so you wanted to get the remote service name..
<SpamapS> right, I'm sure there are other ways to do what I want, which is to note that it is broken so the next time around certain things happen
<hazmat> hmm. we're doing some work right now to get relation-ids attached to relations.. and a relation-ids command.. to do relation cli api from any hook..
<SpamapS> I can probably just use more predictable usernames, and remove the username.. and that will achieve what I want.
<hazmat> that would still be available in a broken context, but its rather bespoke as a resource identifier
<SpamapS> hazmat: as long as its the same during joined/changed/departed and broken :)
<hazmat> it would be
<SpamapS> I can use it as the username/pass
<hazmat> it would look like db:42
<SpamapS> well not the pass
<SpamapS> but the username
<SpamapS> hazmat: so relation-list is just not allowed in broken?
<SpamapS> hazmat: also I still get '{}' when using juju-log in the local provider. Did we ever figure that out?
<hazmat> SpamapS, yeah.. i did a have quick look at that one.. but its low on priority list to fix
<hazmat> SpamapS, i think it should just return an empty list if invoked during broken
<hazmat> its debatable
<hazmat> its effectively useless then, and at least the error informs somewhat
<hazmat> SpamapS, re juju-log its effectively printing its return value.. re the {}
<SpamapS> ah
<hazmat> anyways.. family time.. ttyl
<SpamapS> hazmat: have a good weekend!
<hazmat> SpamapS, you too!
<_mup_> juju/relation-ids-command r501 committed by jim.baker@canonical.com
<_mup_> Fixed tests
<_mup_> juju/relation-id-option r499 committed by jim.baker@canonical.com
<_mup_> Merge upstream
<_mup_> juju/relation-ids-command r502 committed by jim.baker@canonical.com
<_mup_> JUJU_RELATION default
<_mup_> juju/relation-id r493 committed by jim.baker@canonical.com
<_mup_> Rework internal relation id parsing
<_mup_> juju/relation-hook-context r500 committed by jim.baker@canonical.com
<_mup_> Docstrings
<_mup_> juju/relation-id r494 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> juju/relation-hook-context r501 committed by jim.baker@canonical.com
<_mup_> Merged upstream
<_mup_> juju/relation-ids-command r503 committed by jim.baker@canonical.com
<_mup_> Merged upstream
<_mup_> juju/relation-ids-command r504 committed by jim.baker@canonical.com
<_mup_> Docstring
<_mup_> Bug #963640 was filed: Implement external relation id support <juju:In Progress> < https://launchpad.net/bugs/963640 >
<_mup_> Bug #963642 was filed: Implement child relation hook context with lookup by relation id <juju:In Progress by jimbaker> < https://launchpad.net/bugs/963642 >
<_mup_> Bug #963643 was filed: Implement relation-ids hook command <juju:In Progress> < https://launchpad.net/bugs/963643 >
 * SpamapS pokes at the mysql charm
<hazmat> jimbaker, rock the house
<_mup_> Bug #963931 was filed: possibly non-deterministic test juju.unit.tests.test_lifecycle.UnitRelationLifecycleTest.test_join_hook_error <juju:New> < https://launchpad.net/bugs/963931 >
<arashbm> I'm trying to set up a local provider. How much time should it spend on 'Creating master container...' on decent hardware but a slow connection? Is it trying to download something?
<arashbm> How can I find out what's going on inside lxc when juju debug-log says 'Creating master container...'?
<SpamapS> doh.. I just discovered help2man
<SpamapS> and it works perfectly w/ argparse generated output
 * SpamapS does a little dance
<SpamapS> yay, manpages for juju!
<imbrandon> nice
<jelmer> \o/
 * imbrandon is still trying to fight with it on OSX
<imbrandon> ok gas station , brb
<imbrandon> SpamapS: btw mentioned you in my blogpost , didnt think youd mind given the context
<SpamapS> imbrandon: we're all public figures. :)
<imbrandon> SpamapS: haha true
<imbrandon> SpamapS: wanna know something else about nginx that makes it even more of a magic wand, and as i thought of it is gonna make a good case for a _only_ nginx charm
<imbrandon> SpamapS: #       server {
<imbrandon> #               listen     localhost:110;
<imbrandon> #               protocol   pop3;
<imbrandon> #               proxy      on;
<imbrandon> #       }
<imbrandon> :)
<imbrandon> it speaks more than just http{,s} heh
<imbrandon> often overlooked , but if you already knew my bad :)
<marcoceppi> imbrandon: epic
<marcoceppi> holy crap, just thinking about that actually
<imbrandon> marcoceppi: heh
<imbrandon> yea
<imbrandon> its kinda epic and hard to wrap head arround all at once
<imbrandon> takes a bit to sink in
<imbrandon> but its not an after thought either, its fully suported and from the begning was there
<imbrandon> brb afk a min
#juju 2012-03-25
<arashbm> guys I get ` charm not found in repository` when deploying
<arashbm> I have correct folder tree (repo/oneiric/charm-name) and no syntax error in config.yaml or metadata.yaml
<_mup_> juju/local-cloud-img r487 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> juju/local-cloud-img r488 committed by kapil.thangavelu@canonical.com
<_mup_> wip, scope is starting to feel to large on this one, leaving off
<marcoceppi> Anyway to change the verbosity of the juju commands? IE If I want to get rid of the INFO messages on the client side?
<SpamapS> marcoceppi: yes debug-log allows a level to be passed
<SpamapS> marcoceppi:   -l {DEBUG,INFO,ERROR,WARNING,CRITICAL}, --level {DEBUG,INFO,ERROR,WARNING,CRITICAL}
<SpamapS>                         Log level to show
<SpamapS> those are in the wrong order I think
<SpamapS> I think CRITICAL should be between INFO and ERROR
<SpamapS> hm no ...
<SpamapS> should be DEBUG,INFO,WARNING,ERROR,CRITICAL
 * SpamapS opens low bug w/ patch attached
<marcoceppi> Doesn't seem to wsork
<marcoceppi> juju status --level=WARNING
<SpamapS> oh
<SpamapS> that level of logging
<marcoceppi> yeah
<SpamapS> marcoceppi: that -l has to go *before* the subcommand
<marcoceppi> SpamapS: Ah! Gotchya
<marcoceppi> Excellent, thanks
<SpamapS> I believe that is going to be changed in the go version.. kind of annoying the way that works
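[The ordering quirk discussed above falls out of how argparse subparsers work: options defined on the top-level parser must appear before the subcommand. A minimal standalone sketch (not juju's actual code):]

```python
# Global flags live on the top-level parser, so "juju -l WARNING status"
# parses, while "juju status --level=WARNING" would be rejected because
# the "status" subparser defines no --level option of its own.
import argparse

parser = argparse.ArgumentParser(prog="juju")
parser.add_argument("-l", "--level", default="INFO",
                    choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"])
sub = parser.add_subparsers(dest="command")
sub.add_parser("status")

args = parser.parse_args(["-l", "WARNING", "status"])
print(args.level, args.command)  # WARNING status
```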
<SpamapS> imbrandon: I did know that nginx speaks other protocols... I'm not sure it warrants its own singular primary charm.. but definitely as a subordinate it will be a swiss army knife of fun. :)
<_mup_> Bug #964640 was filed: debug-log online help needs some minor tweaks <juju:In Progress by clint-fewbar> < https://launchpad.net/bugs/964640 >
<imbrandon> SpamapS: well i was thinking a Nginx charm with examples like you said in sites-available
<imbrandon> but also a "sane" rockstar defaults too
<imbrandon> for normal web serving
<imbrandon> somthing that like omg-wp or others could depend on like the provies: db: mysql kinda thing and then build on
<imbrandon> but also have some kick ass examples in the sites avail for those that use it alone
<hazmat> marcoceppi, logging options for the client, are just -v before the subcommand
<SpamapS> imbrandon: right. All of that is good in the subordinate sense.. but nginx alone on its own host is only useful for specific uses like as a super high volume loadbalancer.
<imbrandon> nah. its quickly becoming the goto webserver, i think another year and it will have a good 15% of the market
<imbrandon> even for mom and pop shops that overoptimize
<SpamapS> imbrandon: but what use is a static webserver with no way to get files onto it?
<SpamapS> imbrandon: it will always be paired with something else
<imbrandon> i mean yea me and you know its overkill but tell Johnny he don't need a Ferrari when the Ferrari is the same cost as the Pinto
<imbrandon> ;)
<imbrandon> sure we use it with php-fpm now
<imbrandon> even for omg
<imbrandon> i guess i'm missing someething or assume something your not
<SpamapS> imbrandon: charms define all the things that go onto a server right now
<imbrandon> when i said an nginx applince i mean a general suped up server charm - the db
<SpamapS> imbrandon: when subordinates land, it will be easier to componentize things like nginx in a generic way.
<imbrandon> well yea , i'm thinking ahead, and using the provides db mysql think as a platform in my head
<imbrandon> thinking that drupal or wp can site onyop of a nginx provide and a mysql provide
<imbrandon> and reallly cant that be done now, i mean there is an example i thought with like something that was using a ton of provides and then just did a relation on them all at the end
<imbrandon> were they all on single instances ? i dident look close
<SpamapS> imbrandon: 1 charm, 1 instance
<imbrandon> ouch ok , yea i dident have that part in my head
<imbrandon> ouch ouch
<SpamapS> imbrandon: subordinates let you define special charms that will relate to services via the filesystem or 127.0.0.1 ...
<imbrandon> anything i can do to put my code where my mouth is and help the
<imbrandon> compartmentalizing up
<imbrandon> or what you called it
<SpamapS> imbrandon: so in that case, we can make every PHP app a subordinate to nginx, or lighttpd, or whatever, and just have them always run php5-fpm
<imbrandon> right
<imbrandon> yea
<SpamapS> imbrandon: not really. The branches are in review and landing in trunk. Once that lands, you can write the nginx charm. :)
<imbrandon> i like that idea, even better if it lets you overlay like somehinh along the lines of say
<imbrandon> casperfs
<imbrandon> i mean not litterly but thats the idea in my brain
<imbrandon> a filesystem overlay of charm ontop of chamr
<SpamapS> imbrandon: well they all just get extracted and run inside a single instance.. not sure where overlays would come in.
<imbrandon> charm* so like a base nginx with and then a fcgi or fpm php and then a drupal but the drupal sets fpm settings "overlayed" etcv
<imbrandon> instead of conflicts like the dpkg system
<imbrandon> so drupal can come with a specialized php.ini
<imbrandon> that overides the php-fpm php.ini charm
<imbrandon> intentionaly
<imbrandon> but in an overlayed type way
<SpamapS> imbrandon: Ahh I see now
<imbrandon> make any more sense, its like a brain puke i'm having here so some of its by the seat of my pants just learning charms are one per instance atm
<SpamapS> imbrandon: I think we can make that work with just charm store policy on how files get written by subordinates
<imbrandon> so not well thought out but still i think something along those lines is better than say a conflicts system
<_mup_> Bug #964658 was filed: juju crashes when talking to MAAS without an explicit port <juju:New> < https://launchpad.net/bugs/964658 >
<imbrandon> right, the hard(er) part is what happens if the drupal is replaced by wordpress, but i guss in that case its just re-deploy
<SpamapS> imbrandon: its more like Replaces in dpkg terms.. "let me overwrite that charm's files"
<imbrandon> and ont worry about restoring back to php and overlay somehting new
<imbrandon> yea
<SpamapS> imbrandon: yeah, the beauty of the cloud.. if you want to repurpose.. you just spin up new ones. :)
<imbrandon> yea like replaces, but i say overlay that way it can inherit defaults from the parent package even if that package normally wouldnt support it
<ninjix> juju crew, do you have a team preference for Python editor or IDE?
<SpamapS> ninjix: I know hazmat uses emacs
<SpamapS> I prefer vim
<imbrandon> like php.ini is a bad example because it lets you only specify some settingswing natively
<imbrandon> but say ummm cant come up with a good example atm
<imbrandon> lol
<ninjix> is there a any coding style? PEP8 etc?
<hazmat> ninjix, emacs and vim are most common among the dev team, although i've been playing with sublime.. specifically for python.. pycharm or wingide are two commercial well done python ides
<SpamapS> imbrandon: there's also dotdee.. which lets you regenerate a config whenever files in a .d dir change
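[The ".d directory" idea behind dotdee can be sketched as: merge fragments in sorted filename order, so a later fragment (say, one dropped in by a drupal subordinate) overrides an earlier base fragment without the two charms conflicting over one file. File names and settings below are illustrative only, not dotdee's actual layout.]

```python
# Later fragments win when settings repeat, because they are applied
# last during the merge -- the overlay behavior imbrandon describes.
import os
import tempfile

d = tempfile.mkdtemp()
for name, body in [("10-base.ini", "memory_limit=128M\n"),
                   ("90-drupal.ini", "memory_limit=256M\n")]:
    with open(os.path.join(d, name), "w") as f:
        f.write(body)

merged = {}
for name in sorted(os.listdir(d)):          # 10-base before 90-drupal
    with open(os.path.join(d, name)) as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            merged[key] = value

print(merged["memory_limit"])  # 256M
```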
 * imbrandon uses Zend Studio + Espresso.app + nano ( but my charms are in bash or php heh )
<marcoceppi> I haven't seen a php charm yet, mostly bash/dash and Python
<SpamapS> ninjix: pep8 yes
<imbrandon> and vim-full when nessesary, but tryies to avoid it
<marcoceppi> I started writing one
<SpamapS> marcoceppi: mediawiki has a hook in php :)
<ninjix> I'm starting in on a Proxmox provider
<ninjix> any debug env tips?
<marcoceppi> booya :D
<imbrandon> marcoceppi: my drupal uber charm is in php
<SpamapS> made it simpler to read the configs
<marcoceppi> yeah
<imbrandon> bootsrapped in bash and majority of code is php
<imbrandon> to do all the rest
<hazmat> ninjix, proxmox in what sense?
<hazmat> ninjix, as a local provider or controlling remote containers?
<ninjix> I want my juju instances to launch faster so I'm going to cut cobbler middle man out
<hazmat> gotcha
<imbrandon> marcoceppi: basicly after apt-get install blah blah , then eveything else is in php, kinda half way intentional to be diffrent and since drupal sysadmins/users likely will know it more so than python
<ninjix> hazmat: remote proxmox qemu
<SpamapS> ninjix: remind me again.. proxmox is a virtualization frontend, like libvirt or XenServer ?
<hazmat> ninjix, cool, there's a half implemented xen server provider as well.. but the nutshell is look at providers/common/base.py  it basically defines the interface
<SpamapS> imbrandon: yeah thats part of the reason juju just uses execs.. lets people write charms in whatever language they're comfortable in
<ninjix> SpamapS: proxmox is a virtualization env more along the lines of ESX
<SpamapS> hazmat: there's one evil place in juju/unit/address.py where providers are ifelse'd
<hazmat> ninjix, here's a list of provider responsibilities, its a bit dated, we have constraints for machine/image selection now.. https://lists.ubuntu.com/archives/juju/2011-November/000906.html
<hazmat> SpamapS, yeah.. its a dispatch for provider specific logic
<ninjix> hazmat: thanks
<SpamapS> hazmat: should be moved into the providers as a new method of MachineProvider
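[A rough shape of the MachineProvider plugin interface being discussed might look like the following; the method names here are guesses for illustration, not the real signatures in juju/providers/common/base.py.]

```python
# Illustrative-only sketch of a provider plugin interface; the real
# contract lives in juju/providers/common/base.py and differs from this.
class MachineProvider:
    """What a provider (ec2, orchestra, maas, proxmox, ...) fills in."""

    def bootstrap(self, constraints):
        raise NotImplementedError

    def start_machine(self, machine_data):
        raise NotImplementedError

    def shutdown_machine(self, machine_id):
        raise NotImplementedError

    def get_machines(self):
        raise NotImplementedError


class ProxmoxProvider(MachineProvider):
    """Hypothetical provider like the one ninjix is starting on."""

    def get_machines(self):
        return []  # would query the proxmox API here

print(ProxmoxProvider().get_machines())  # []
```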
<imbrandon> yea, its not really the _best_ for cli apps, but its definitely not unprecedented, using the pear Console class and Drush ( php drupal cli app ) the community is already familiar with and saves me a lot of code
<hazmat> SpamapS, the things running that shouldn't have access to the provider
<SpamapS> imbrandon: I wrote quite a few useful PHP cli apps at my former employer
<imbrandon> sides release early realase often right, who knows maybe it will have some vb6 code before all said and done ;)
<hazmat> SpamapS, but yeah.. it could use for some restructuring to push/centralize that knowledge to the appropriate place
<imbrandon> yea i have as well, most were / are just phing extenstions though :)
<imbrandon> back in a bit
<ninjix> hazmat: ahh... wish I had the link last night... had to discover most of those behaviors and methods the hard way. :)
<hazmat> although, mostly its just a dictionary lookup based on provider type which is commonly known, its an additional responsibility that's not advertised very well
<imbrandon> btw /waves , hi everyone else
<SpamapS> hazmat: ultimately, IMO, it should only take a directory in juju/providers to recognize a provider....
<SpamapS> hazmat: they're so close to being plugins already :)
<SpamapS> hazmat: just have to drop the hooks in juju/unit/address and juju/environment/config
<imbrandon> so hows the namespace thing work with charms , first come first server ? are there plans to let remote charms be listed ( if anyone here is familiar with the "brew" and "tap" structure for OSS OS X apps is what i'm thinking, its like bsd ports and dpkg mixed up, but in a goood way )
<SpamapS> imbrandon: flat namespace just like packages.
<imbrandon> hrm kk
<hazmat> SpamapS, while its true we could make them pluggable, my attempts to get independently distributed plugins into core have met with resistance
<SpamapS> actually thats not entirelya ccurate..
<hazmat> at least to the extent they relied on python specific mechanisms
<SpamapS> imbrandon: local: and cs: are independent of one another..
<SpamapS> hazmat: hm.. I wonder why. :)
<imbrandon> hazmat: heh i think thats what i'm asking about too in another way
<SpamapS> any growth of the python version's capabilities means more for the go team to duplicate. I think its fair to resist non-essential extensions for that reason.
<SpamapS> imbrandon: so your local:foo won't interfere with cs:foo
<imbrandon> SpamapS: yea but what about dist, say i want something dist but not via the charm store
<imbrandon> like akin to adding a source to apt.list ( obviosuly diff )
<imbrandon> so github:foo
<imbrandon> or something
<SpamapS> imbrandon: no, there's cs, and local, thats it
<imbrandon> again alot of this is brain puking as i'm just now wrapping my head around alot of this , so if i'm being to bothersome tell me to stfu and rtfm :)
<hazmat> imbrandon, ninjix, at the moment, the best way to contribute a provider plugin is writing it in go.. but that won't be useful with 12.04. its not clear if we'd add any new pythons ones into 12.04.1 core if they didn't have go analogues.
<SpamapS> imbrandon: of course, you can also have  cs:~imbrandon/precise/foo
<imbrandon> ahh gonna bake in lp like bzr did ?
<SpamapS> imbrandon: its not LP actually, but yes
<SpamapS> imbrandon: the charm store does pull from launchpad.. but it doesn't have to
<imbrandon> cool
<ninjix> hazmat: no problem. I'm learning as I go here. Going to use the Orchestra provider as my starting template.
<imbrandon> hrm i guess thats where the bit hazmat was talking about adding a provider
<imbrandon> err wait
<imbrandon> not really cuz thats a deployment target not
<imbrandon> a source
<imbrandon> hrm
<hazmat> ninjix, ugh.. the testing for that one is a bit gnarly
<hazmat> ninjix, i'd use the  maas provider, its probably the simplest of the bunch
<hazmat> although its also the one with the least amount of real world usage.
<imbrandon> so if it dont have to how do i specify cs:~imbrandon/precise/foo == github.com/bholtsclaw/charms/foo+branch_precise
<imbrandon> i kknow there isnt now, but like is it planned or like a nono
<imbrandon> and alot of it is just semantics as once the inital charm is downloaded then it can come from anywhere
<imbrandon> i know that
<imbrandon> okies, actually i got to run anyhow, thanks SpamapS and hazmat for entertaining my thoughts for a bit, i'll be back in hour or two to finish the drupal charm up today hopefully
<imbrandon> and now i gotta come up with as uber name for it :)
<imbrandon> s#as#a
<ninjix> hazmat: ok
<imbrandon> SpamapS: btw dont you take weekends off ? hahaha j/k man
 * imbrandon is out
<SpamapS> I do
<SpamapS> and I'm out too
<SpamapS> :)
#juju 2013-03-18
<dissipate> hello
<dissipate> is this just another puppet or chef?
<thumper> dissipate: no, not really
<dissipate> thumper, different use cases? i'm confused
<thumper> dissipate: AFAICT juju operates at a different level https://juju.ubuntu.com/
<thumper> I don't know much about puppet or chef though
<dissipate> thumper, what 'level' are we talking about?
<thumper> http://askubuntu.com/questions/52840/whats-the-difference-between-juju-and-puppet-chef
<thumper> orchestration level rather than server level
<dissipate> thumper, i read that exact post
<dissipate> thumper, so is it like mcollective?
 * thumper doesn't know what mcollective is
<dissipate> thumper, server orchestration for puppet
<thumper> um... perhaps similar
 * thumper is still new to juju
<davechen1y> dissipate: juju is different to host based services like puppet and chef
<davechen1y> at its heart, juju holds the _service_ as the primary data item
<davechen1y> whereas puppet, chef, mcollective, cfengine, etc, all use the host as the primary data item
<davechen1y> we (canonical) believe this is important because in a world full of virtual servers, like aws and openstack
<davechen1y> you create and destroy hosts constantly
<davechen1y> so making the host the primary data item doesn't sound very smart
<iggy> except if you want to run multiple "services" on a host
<davechen1y> iggy: we are working on 'colocation'
<davechen1y> which will let you place multiple service units on a host
<davechen1y> it's not ready yet
<iggy> that's promising
<AskUbuntu> Is it possible to edit same juju environment on different machines? | http://askubuntu.com/q/269362
<dissipate> davecheney, i see. so it's a different approach and philosophy?
<davecheney> dissipate: yes, i believe so
<iggy> my boss loves all things canonical/ubuntu, he wants me to start looking at juju and so far it's not really a good fit for us
<iggy> at least not from what I can tell
<dissipate> davecheney, i see. sounds like it could cause a stir in the devops community. a lot of people are into Puppet and Chef big time
<iggy> and bcfg2
<davecheney> dissipate: indeed they are
<dissipate> davecheney, is juju geared more towards developers vs. system administrators?
<davecheney> dissipate: i believe that juju is targeted at developers
<davecheney> to empower them to deploy their own applications
<iggy> I was assuming running a config mgmt system along side juju, personally
<dissipate> davecheney, cool. that's me.
<dissipate> am i reading this correctly?
<dissipate> only about 200 charms in the 'store'?
<dissipate> http://jujucharms.com/charms
<sarnold> dissipate: heh, I was surprised to see how many there were last time I browsed the charmstore, there's a lot more services out there than I expected :)
<dissipate> sarnold, 200 doesn't seem low?
<dissipate> does a service have to have an ubuntu package before a juju charm can be created for it?
<mariusko_> dissipate: no
<sidnei> m_3: moved the unittests to hooks/tests
<sidnei> since they are tests for the hooks anyway
<m_3> sidnei: thanks, I'll take a look
<_mup_> Bug #1156649 was filed: pyJu fails to deploy/upgrade charm with 32MB payload <juju:New> < https://launchpad.net/bugs/1156649 >
<negronjl> jcastro: queue is clean
<jcastro> negronjl: !!! I see what you did there.
<negronjl> jcastro: well ... It's not that I am doing much on this phone conf ... you're talking too much :P
<jcastro> I shut up
<jcastro> you won't hear me again
<negronjl> jcastro: that's it ??? no complaining about the queue ? :)
<negronjl> jcastro: BTW.  I am going to Oakland
<jcastro> \o/
<jcastro> good good
<negronjl> jcastro: ... that means that you should probably bring adobo :)
<jcastro> I've got like 2 bottles with your name on it
<negronjl> jcastro: nice
<negronjl> jcastro: I know a PR restaurant here too
<jcastro> oh excellent
<SpamapS> Oakland?
<m_3> mgz: ping (re 0.7 release)
<mgz> m_3: hey
<m_3> mgz: hey, so what're the plans with that
<m_3> mgz: saw your mail about last Thursday... was that put on hold?
<m_3> mgz: reason I'm asking is I'm trying to figure out the right time to add update-alternatives to the packaging
<mgz> plan is mostly that, plus a few days, and only putting 0.5/0.6 in the respective ppas, rather than looking at sru
<mgz> the right time for update alternatives is this week, with 0.7 in raring I think
<m_3> mgz: should we hold off on update-alternatives as a separate sru?  or hurry
<m_3> mgz: ack... thanks
<mgz> and we don't touch the existing ppas
<mgz> (as in, keep them not co-installable)
<mgz> SpamapS: Oakland Cemetery in Atlanta
<m_3> mgz: ack... lemme look at the packaging branch and see if the recipes can be decoupled
<mgz> I think I've dug up all of Clint's packaging bits and can make sense of them
<mgz> (there's an amusing "uncommit this" change on one of the branches where he was fixing up some mess I made I think)
<m_3> mgz: awesome, yeah I've walked through them briefly before
 * m_3 food... back later
<mramm> jcastro: you around?
<jcastro> yep
<jcastro> what's up
<mramm> got a few min to talk about those slides?
<mramm> and also about our messaging in general?
<SpamapS> mgz: if you need any help digging things out just let me know
<mgz> SpamapS: ta!
<jcastro> robbiew: mramm: m_3: our juju session was accepted at OSCON!
<sarnold> woo, congrats jcastro :)
<robbiew> jcastro: nice!
<jcastro> m_3: hah man, at 9am
<robbiew> doh!
<jono> arosales, jcastro all set?
<jcastro> yep
<arosales> @ https://plus.google.com/hangouts/_/ecb6fcef7b9fda1f8d2721c64f9cef5f8ae9e24d
<arosales> jcastro: jono ^
<jono> arosales, we had a hangout in the invite
<jono> we are there :-)
<jcastro> https://plus.google.com/hangouts/_/7d6aed364e1672b0a55deccaec6fb0823cd9cecd
<arosales> oh, thats the one I was using
<jcastro> jono: "Have fun storming the castle!"
<jono> jcastro, always :-)
#juju 2013-03-19
<jcastro> m_3: hey so, review grouping/breakup/whatever we call it
<m_3> jcastro: ack, should be easy to create a couple... just wanna minimize interruption of charm urls
 * jcastro nods
<ihashacks> SpamapS: I see m_3's comment on the puppet/puppetmaster merge request, but then I see yours about "it may not be as cut and dry as usual"
<ihashacks> SpamapS: ... so what do I do? :)
<jcastro> arosales: any changes you want to do to the survey? or just roll as is?
<arosales> jcastro: which survey are you using, the general one?
<dpb_> Hi all -- what could be happening here: /var/lib/juju/units/lds-quickstart-0/charm/hooks/install: line 322: open-port: command not found
<dpb_> (command not found?)
<chrischris> dpb_: have you tried the `juju debug-hooks` command?
<jcastro> arosales: http://www.surveymonkey.com/s/ubuntu-juju
<jcastro> I believe you had me point it to a redirect thing
<jcastro> that you could swap in and out
<chrischris> dpb_: https://juju.ubuntu.com/docs/hook-debugging.html and/or https://juju.ubuntu.com/docs/write-charm.html
<arosales> jcastro: yup, thats the correct one.
<chrischris> dpb_: it'll crack open a shell during the different hooks and you can interactively see whats going on
<arosales> jcastro: I think that has the initial questions we want.
<jcastro> okey
<dpb_> chrischris: good idea
<arosales> mramm: do you see any questions you may want to add to http://www.surveymonkey.com/s/ubuntu-juju
<arosales> jcastro: I'll confirm with mramm real quick to see if any other questions may need to be added.
<jcastro> sure, no worries
<ahasenack> chrischris: the problem is that open-port and all the other juju commands were installed in /usr/local, not /usr
<ahasenack> chrischris: because environments had juju-origin: lp:juju
<ahasenack> chrischris: probably some "make && make install" is used in that case, and it defaults to /usr/local
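The failure ahasenack describes can be reproduced and worked around in a few lines of shell. A hedged sketch only: the temp directory stands in for /usr/local, and the `open-port` stub is hypothetical, since on a real unit the tool is installed by juju itself.

```shell
# Sketch of the failure mode: hook tools like open-port land under
# /usr/local/bin when juju was built from source (juju-origin: lp:juju),
# so a hook whose PATH only covers /usr/bin can't find them.
fake_prefix=$(mktemp -d)                      # stand-in for /usr/local
mkdir -p "$fake_prefix/bin"
printf '#!/bin/sh\necho "would open port $1"\n' > "$fake_prefix/bin/open-port"
chmod +x "$fake_prefix/bin/open-port"
# Before fixing PATH, the hook sees exactly the reported error:
command -v open-port >/dev/null || echo "open-port: command not found (as in the hook)"
# Fixing the install prefix (or PATH) makes the tool visible again:
PATH="$fake_prefix/bin:$PATH"
open-port 80/tcp
```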
<robmoore> Anyone know the status of logstash-agent?
<robmoore> It's listed on the charm store but when I try to deploy I see: 2013-03-19 11:35:32,569 ERROR Error processing 'cs:precise/logstash-agent': entry not found
<m_3> robmoore: hmmm... lemme look
<m_3> robmoore: grrrr.. it looks like the branch stacking is broken for that branch in launchpad
<m_3> robmoore: lemme see if I can fix it
<jcastro> m_3: ok, new stacked branch created
<m_3> stacked... :(... lemme think
<m_3> ok, scratch that... so go to the lp page for that branch
<m_3> and delete that branch
<m_3> once it's deleted, start over
<m_3> wipe your local repo
<m_3> bzr branch lp:charms/alice-irc
<jcastro> ok sec
<m_3> cd alice-irc
<m_3> bzr init lp:~irc-charmers/charms/precise/alice-irc/trunk
<m_3> bzr push lp:~irc-charmers/charms/precise/alice-irc/trunk
<m_3> should be unstacked then
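m_3's steps, gathered into one sequence for reference. This is a sketch only, reusing the alice-irc branch names from this exchange; it assumes the stacked Launchpad branch has already been deleted and the local repo wiped, and it needs Launchpad access to actually run.

```shell
# Unstack an official charm branch (per the recipe above).
bzr branch lp:charms/alice-irc                              # fresh standalone local copy
cd alice-irc
bzr init lp:~irc-charmers/charms/precise/alice-irc/trunk    # create an empty, unstacked target
bzr push lp:~irc-charmers/charms/precise/alice-irc/trunk    # push the full history into it
bzr info lp:~irc-charmers/charms/precise/alice-irc/trunk    # verify: no "stacked on" line
```

The `bzr init` against the remote URL is the critical step: it creates the target branch standalone, so the subsequent push cannot stack on a personal branch.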
<jcastro> Using default stacking branch /+branch-id/589819 at bzr+ssh://bazaar.launchpad.net/~irc-charmers/charms/precise/alice-irc/
<jcastro> Created a standalone branch (format: unnamed)
<jcastro> after the first command
<jcastro> so I think you're right?
<m_3> push the second and we can see
<jcastro> I did
<m_3> one sec
<m_3> awesome... `triton:~ $ bzr info lp:~irc-charmers/charms/precise/alice-irc/trunk
<m_3> sorry, paste error
<m_3> bzr info <branch>
<jcastro> standalone branch. \o/
<m_3> ok, lemme see if I can promulgate that one
<jcastro> hey so if we're redoing this and renaming promulgate to "promotion", I won't complain
<m_3> haha
<SpamapS> noooo
<jcastro> I knew that would get your attention
<Catbuntu> Hi
<m_3> jcastro: ok, so everything looks good
<m_3> jcastro: next up...
<m_3> jcastro: wipe your local branch
<m_3> jcastro: bzr branch lp:charms/alice-irc
<m_3> jcastro: make a change
<m_3> jcastro: and push it back up to lp:charms/alice-irc
<m_3> change the readme as we'll keep that history
<jcastro> bzr push lp:~irc-charmers/charms/precise/alice-irc/trunk
<jcastro> as the push command?
<m_3> nope
<m_3> bzr push lp:charms/alice-irc
<m_3> then we do forensics :)
<jcastro> done
<m_3> ok, one sec
<m_3> jcastro: sweet
<m_3> jcastro: so you're not a charmer, but you made a change to a store-based charm by being in the group that owns the official branch
<jcastro> that's what we wanted right?
<m_3> jcastro: yup
<m_3> jcastro: so we can now make whatever "xxx-charmers" groups we want...
<jcastro> rock
<jcastro> so off the bat, the gui guys right?
<m_3> jcastro: we'll consider making them the owners of the official branch...
<m_3> under what conditions?
<m_3> ~charmers is added to the group
<m_3> what else?
<m_3> +2 by charmers or something?
<jcastro> yeah
<jcastro> +2 seems to be fine
<m_3> we give a lot of quality control over to the group
<jcastro> +2 is like our default policy for everything
<jcastro> SpamapS: marcoceppi: negronjl jamespage ^^^
<jcastro> thoughts?
<m_3> then reviews of group branches have to be reviewed by _either_ a group member or a member of charmers
<marcoceppi> So charmers needs to +2 anyone to be in that specific xxx-charmer group?
<SpamapS> no
<negronjl> marcoceppi, that's my understanding ...
<SpamapS> charmers needs to +2 any admins to that team
<negronjl> jcastro: care to explain ?
<marcoceppi> SpamapS: ah, just for the admins
<negronjl> SpamapS, thx
<SpamapS> because admins can add anybody they want
<SpamapS> I don't think charmers wants to be in charge of every team like that
<jcastro> negronjl: ok so tldr, we want to enable teams to take over a charm
<negronjl> jcastro: i got that
<negronjl> jcastro: just trying to be clear on the +2 part
<SpamapS> it would be good to include, in policy, an admonition to charmers to educate admins of those teams how important the membership is
<jcastro> +2 from existing charmers to make a team and put people in it?
<SpamapS> also those teams *must* stay closed.. no autojoining :)
<SpamapS> jcastro: I think this is worth a mailing list discussion, but IMO, charmers should be able to create a team, own it, be the admins of it, and add non-admins, without any charmers +2's
<jcastro> agree with mailing list
<SpamapS> think DM-Upload for Debian
<jcastro> ok so make up your mind, for the first 6 months you were like "don't make it debian"
<jcastro> now you're like "make it debian"
<SpamapS> if a DD thinks you're good, and your key is in the DM keyring.. they can just let you upload
<SpamapS> jcastro: Don't make it Debian. Don't throw out the GOOD parts of Debian either.
<SpamapS> ok time to get on plane
<SpamapS> just want to let you all know, good luck, we're all counting on you
<jcastro> heh
<negronjl> jcastro: I tend to agree that ~charmers create and own the groups
<m_3> jcastro: no on openstack-charmers... that group already exists, but they want it isolated for development
<negronjl> jcastro: It seems to be simpler ( at least for now )
<jcastro> ok
<jcastro> I will put that as the proposal on the list
<jcastro> and you guys ack/nack
<negronjl> jcastro: ok
 * m_3 reading backchannel
<m_3> jcastro: thanks... yeah, we can decide policy from the list
<jcastro> nod
<m_3> jcastro: it takes a little bit to create the group and switch the branch ownerships around
<m_3> jcastro: might be harder for larger existing sets of charms (i.e., take longer and be more interruptive of development)
<m_3> so the only non-charmers branch we have promulgated atm is alice-irc (I think)
<m_3> I'll leave that this way for a bit for more testing
<m_3> but plan to revert that one charm to charmers-owned in the future and delete the irc-charmers team
<jcastro> upstream has abandoned alice anyway, we should just blow it away when we're done
<m_3> jcastro: ack
<m_3> we'll have to see how that affects github migration plans
<m_3> to be clear... nothing's been done with groups but testing so far... wanted to see it work
<marcoceppi> m_3 How does this change the namespacing? I'm assuming the lp:charms/... alias just points to lp:~xxx-charmers/... instead of lp:~charmers, right?
<m_3> marcoceppi: xactly
<marcoceppi> Cool
<m_3> marcoceppi: so there's a `charm promulgate --owner-branch <charm-dir>`
<m_3> that doesn't require it to be lp:~charmers
<m_3> otherwise it barfs
<m_3> i.e., without the option
<m_3> marcoceppi: btw, charm-tools help is pretty much borked as far as I can tell
<m_3> marcoceppi: I think the fix to that is to make the top-level help stupider... and just pass everything to the subcommand
<marcoceppi> m_3 As in "marco wtf did you do" or "fyi while you're poking around"
<m_3> but I think we bounced back and forth on that one
<m_3> ha!
<m_3> marcoceppi: no... just while you're messing with it
<marcoceppi> Oh whew nothing I broke
<marcoceppi> Yeah, I've found it to be annoying too
<m_3> right
<m_3> totally
<marcoceppi> Everything should be handled by the subcommands, imo
<m_3> agree, but really up to the person who takes the time to go fix it :)
<marcoceppi> The only help "charm" should have is listing the available subcommands
<marcoceppi> ;)
<m_3> yup
<hazmat> jcastro, that's awesome re surge Edicts.. http://pastebin.ubuntu.com/5629367/
<jcastro> that is cool
<m_3> ok, so for pyju and goju to live together on the same machine...
<m_3> update-alternatives... no biggie
<m_3> here's the question:
<m_3> should we have versioned binaries.... i.e., /usr/bin/juju points to /usr/bin/juju-0.7 or /usr/bin/juju-2.0
<m_3> and so open-port would be
<m_3> /usr/bin/open-port-0.7 and /usr/bin/open-port-2.0
<m_3> --or--
<m_3> versioned directories?
<m_3> /usr/bin/juju points to /var/lib/juju/0.7/juju
<m_3> and /usr/bin/open-port points to /var/lib/juju/0.7/open-port
<m_3> any opinions?
<marcoceppi> I feel like, versioned directories is better? The only thing I've used update-alternatives for is java and that uses versioned dirs
<marcoceppi> I think Python does it via versioned binaries though
<negronjl> I think that versioned directories is better as well.
<marcoceppi> I see a small amount of users trying to do juju-0.7 vs using update alternatives, whereas I think versioned binaries makes more sense for something like Python
<m_3> cool... thanks y'all
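The versioned-directories option under discussion can be sketched with plain symlinks; update-alternatives would manage the same link via /etc/alternatives, but `ln` is used here so the demo runs anywhere. All paths are temp-dir stand-ins for /var/lib/juju/<version>/ and /usr/bin.

```shell
# One directory per juju version, with a single managed symlink for
# the binary (what update-alternatives would maintain).
root=$(mktemp -d)
mkdir -p "$root/0.7" "$root/2.0" "$root/bin"
printf '#!/bin/sh\necho juju 0.7\n' > "$root/0.7/juju"
printf '#!/bin/sh\necho juju 2.0\n' > "$root/2.0/juju"
chmod +x "$root/0.7/juju" "$root/2.0/juju"
ln -s  "$root/0.7/juju" "$root/bin/juju"    # alternative points at 0.7
"$root/bin/juju"
ln -sfn "$root/2.0/juju" "$root/bin/juju"   # flip the alternative to 2.0
"$root/bin/juju"
```

With this layout, all the hook tools (open-port, etc.) flip together when the directory link changes, instead of each needing its own versioned binary.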
<hazmat> SpamapS, you mentioned another oasis standard around the orchestration topic i think.. do you remember the name?
<SpamapS> hazmat: OASIS/CAMP
<hazmat> SpamapS, thanks
<hazmat> that looks pretty paas focused
<hazmat> but i can see the cf lineage now
<SpamapS> hazmat: versus juju which only looks like a paas when you read the charms.. ;)
#juju 2013-03-20
<robmoore> m_3: I see in the logs you responded to me but not sure if you were able to look at logstash-agent. It still seems undeployable.
<melmoth> hola, with the openstack provider, how can i tell i want my vms to use a given flavor (say m1.large) ?
<melmoth> ahh, may be constraint instance-type
<melmoth> got it
<melmoth> juju deploy etherpad-lite  --constraints  instance-type=m1.large
<melmoth> and you end up with /dev/vdb
<mgz> melmoth: right.
<mgz> you probably want to use the generic constraints instead though
<melmoth> like bootstraping with the constraint ?
<mgz> melmoth: like, saying cpu=4 mem=8192 (or whichever of those you actually care about instead)
<logix812> Anyone else have issues getting the Beta OSX version running?
<m_3> logix812: not sure of the status of the osx client... imbrandon? jcastro?
<dinopetrone> afternoon all. Attempting to install juju on osx, i ran into a small issue.  Seems like juju is requiring a class in yaml that i don't have
<dinopetrone> ImportError: cannot import name CSafeLoader
<dinopetrone> >>> yaml.__version__
<dinopetrone> '3.10'
<dinopetrone> i run into this issue when i try and run the command $ juju bootstrap
<mthaddon> gary_poster: howdy - would you be a good person to talk to about the output of "juju status" and what's considered an error (I'm thinking you may have thought about this from your work on the juju GUI)?
<gary_poster> mthaddon, hi.  Maybe? :-)  What do you have in mind?
<mthaddon> gary_poster: we'd like to parse the output of status for what things could be considered a problem in a juju env - wondered if you have any pointers. Looking for agent-state problems or relation-errors only, I think
<gary_poster> mthaddon, I see.  yeah, we categorize everything in a *-error state or with relation errors as units in error requiring attention
<mthaddon> gary_poster:  egrep 'relation-errors|agent-state:.*-errors' ?
<gary_poster> mthaddon, that sounds about right (though error, not errors).  I'm trying to find the code that actually aggregates this for us...
<gary_poster> mthaddon, fwiw, this is our code. http://pastebin.ubuntu.com/5631629/ The only maybe interesting thing is that this seems to say that there might be an "error" state (as opposed to *-error).  I don't recall having seen that, but perhaps it is reasonable.
<mthaddon> thx
<robmoore> m_3: I see in the logs you responded to me but not sure if you were able to look at logstash-agent. It still seems undeployable.
<m_3> robmoore: ack... I saw what was wrong, haven't had a chance to fix it yet
<m_3> look at it after current meeting
<robmoore> m_3: Thank you!
<jcastro> hazmat: either you or mramm go to the docket talk at pycon?
<hazmat> jcastro, no.. i have the src though
<jcastro> I mean docker, the lxc container "thing".
<hazmat> its still using lxc-start/lxc-stop
<jcastro> oh so for us it wouldn't be much different then
<hazmat> hmm.. i wouldn't say that..
<hazmat> i still think warden is probably a better fit out of the box atm
<hazmat> ie. its a rest api to lxc container creation (really direct cgroup) over a unix socket
<hazmat> used in cloud foundry
<hazmat> docker is just a cli atm
<jcastro> ok, I was just doublechecking it was on your radar
<hazmat> definitely
<hazmat> docker being go and lxc related.. definitely on the radar
<hazmat> at a minimum it might form a useful basis for an lxc wrapper; alternatively we could add a remote api layer to it.. either way could be useful.
<m_3> robmoore: so it's a problem between the launchpad branches (they're stacked) and the charm store... still debugging
<m_3> robmoore: in the mean time, I suggest you deploy from a local repo...
<m_3> robmoore: either `bzr branch lp:charms/logstash-agent` or `git clone https://github.com/charms/logstash-agent`
<m_3> robmoore: stick that into a local directory like '~/charms/precise' and deploy with `juju deploy --repository ~/charms local:logstash-agent`
<m_3> robmoore: hopefully figure out what happened with that branch.  The fix I tried last night didn't work
<m_3> robmoore: fyi, it will most likely work once `bzr info lp:charms/logstash-agent` shows no personal (~paulc) branches
<robmoore> m_3: I appreciate the instructions. I'll give it a shot and let you know how it goes.
<m_3> robmoore: sorry man... this problem's been here a while... haven't had chance to clean it up
<m_3> btw, all ~charmers:
<m_3> to promulgate (promote a branch to be an official charm in the store)
<m_3> http://paste.ubuntu.com/5631671/
<m_3> the `bzr init` step is critical
<m_3> after you've promulgated, the `bzr info lp:charms/<charm-name>` should show no personal branches
<m_3> i.e., be unstacked
<m_3> I'll get that out to the list as well
<SpamapS> m_3: quite easy to put those steps into promulgate...
<m_3> SpamapS: yeah, good point
<m_3> SpamapS: I just learned about the init step last week.  I'd always just cloned a fresh version from lp:~charmers before promulgating
<SpamapS> m_3: seems logical to me. :-P
<SpamapS> m_3: anyway, its bzr/lp weirdness that no charmer should need to know about :)
<m_3> SpamapS: agree
<m_3> robmoore: ok, logstash-agent's fixed... should deploy fine from the store now
<robmoore> m_3: Excellent! Thanks for your help.
<m_3> robmoore: np... hazmat figured it out... more subtle bug than just branch stacking
<hazmat> jcastro, incidentally just pushed cs:~hazmat/precise/docker if you want to play with it
#juju 2013-03-21
<_mup_> Bug #1154033 was filed: ec2-provider -> FileNotfound with swift3-middleware <juju:New> < https://launchpad.net/bugs/1154033 >
<jcastro> m_3: there's a guy trying to charm up newsblur with a question on G+, I CCed you, though I did ask him to repost on the list so we have it there.
<jcastro> it's a mongodb charm question, but I don't think juan is up and about yet
<robmoore> I see references to auth-proxy in relation to the kibana charm. However, auth-proxy isn't listed in the charm store. Is there a reason it's not included?
<jcastro> looks like it's not in the store:
<jcastro> https://bugs.launchpad.net/charms/+bug/1083008
<_mup_> Bug #1083008: new charm - auth-proxy <Juju Charms Collection:Expired> < https://launchpad.net/bugs/1083008 >
<jcastro> we should probably reping paul to so see if we can give it another go?
<robmoore> It is listed here: http://jujucharms.com/~paulcz/precise/auth-proxy
<robmoore> But trying to run juju deploy cs:~paulcz/precise/auth-proxy fails
<robmoore> Is there a way to re-open the ticket (1083008)?
<SpamapS> robmoore: done
<robmoore> SpamapS: Thanks!
<hazmat> jcastro, robmoore that's waiting on a charm store refresh to expire cached entries..  for a few months the charm store didn't accept juju- in subordinate relation names. its been updated. but it cached the previous validation errors against the bzr revision
<jcastro> ack
<jcastro> thanks
<hazmat> i've got an outstanding request/script to clean it up.. in the meantime the alternative is asking the author to push a no-op commit
<hazmat> alternatively bzr branch and deploy it as a local charm
<hazmat> jcastro, did you have a play with docker/docker charm?
<jcastro> not yet
<robmoore> hazmat: Is there something you can point me to that would provide some detail on how to do the bzr branch approach?
<hazmat> robmoore, mkdir -p charms/precise && cd charms/precise && bzr branch lp:~paulcz/charms/precise/auth-proxy/trunk auth-proxy && cd ../.. && juju deploy --repository=. local:precise/auth-proxy
<mthaddon> anyone able to help troubleshoot a relation-error I'm seeing from pgbouncer -> postgresql - http://paste.ubuntu.com/5634594/
<sidnei> mthaddon: looks like it should loop over dbproxyid, relation-list only accepts a single one at a time.
<mthaddon> sidnei: ah, interesting - I was wondering if it needed to be an integer rather than db-proxy:6, but that makes sense
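sidnei's fix amounts to calling relation-list once per relation id rather than passing several ids at once. A runnable sketch, with the juju hook tools replaced by stubs on PATH; the relation ids and unit name are invented, and inside a real hook the actual tools would be on PATH already.

```shell
# Stub the hook tools so the loop shape runs outside a juju unit.
stubs=$(mktemp -d)
printf '#!/bin/sh\necho "db-proxy:6 db-proxy:7"\n' > "$stubs/relation-ids"
printf '#!/bin/sh\necho "postgresql/0"\n'          > "$stubs/relation-list"
chmod +x "$stubs/relation-ids" "$stubs/relation-list"
PATH="$stubs:$PATH"

# One relation-list call per relation id, never several ids at once.
for rid in $(relation-ids db-proxy); do
    for unit in $(relation-list -r "$rid"); do
        echo "related unit $unit on $rid"
    done
done
```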
<mthaddon> negronjl: https://bugs.launchpad.net/charms/+bug/1046318 says approved and promulgated, but I don't see an official pgbouncer charm - http://jujucharms.com/search?search_text=pgbouncer
<_mup_> Bug #1046318: New Charm: pgbouncer <canonical-webops-juju> <Juju Charms Collection:Fix Released by charmers> < https://launchpad.net/bugs/1046318 >
<negronjl> mthaddon: https://code.launchpad.net/~charmers/charms/precise/pgbouncer/trunk
<negronjl> mthaddon: I wonder if promulgate borked the action ...
<negronjl> mthaddon: I'll check in a few minutes ...
<mthaddon> negronjl: ah cool, thx - weird that it doesn't show up in text search
<negronjl> mthaddon: I don't see anything wrong with it.
<negronjl> mthaddon:  let me get a second opinion .. m_3, can you look into this and see if you find anything wrong ?
<mthaddon> negronjl: should I file a bug against the charmstore?
<negronjl> mthaddon, yes
<mthaddon> k, will do
<mthaddon> negronjl: which project is that against, do you know?
<mthaddon> I'm guessing https://launchpad.net/charmworld
<negronjl> mthaddon, I think so
<mthaddon> thx
<mthaddon> bug#1158414
<_mup_> Bug #1158414: pgbouncer charm not searchable <canonical-webops> <charmworld:New> < https://launchpad.net/bugs/1158414 >
<sidnei> hum, apparently devopsdays is coming to Rio this august
<robmoore> hazmat: Thanks again!
<jcastro> negronjl: I got a question in G+ about the mongo charm, I pinged you on G+ there if you have time to respond
<negronjl> jcastro: I don't see any pings from you ... what's the question ?
<jcastro> https://plus.google.com/116015965439782966698/posts/VWjGipsVvaX
<jcastro> something about mongoengine (I am not familiar with it)
<negronjl> jcastro: his questions and concern revolve around mongoengine ( not mongodb )
<negronjl> jcastro: there's not much I can do there.
<jcastro> ok what should I tell him?
<jcastro> When I read it I was like "oh that must be part of mongodb"
<negronjl> jcastro: nope .. mongoengine is a python lib for mongodb
<jcastro> oh ok
<negronjl> jcastro: reach out to the django charm author
<jcastro> so the standard answer applies, install what you want, heh
<negronjl> jcastro: yup
<hazmat> jamespage, good call re mongodb.. https://jira.mongodb.org/browse/SERVER-9059
<hazmat> re 2.2 for r
<jamespage> hazmat, ouch!
<sidnei> oh, samuel clay. seen that name before... :)
<hazmat> changing the wheels while the car is in motion :-)
<thumper> hazmat: morning
<thumper> hazmat: wheels are too easy, lets change the motor
<Sander^home> Is it possible to set up the relation between two programs automatically when those two are installed? Without doing any command? Maybe in a configuration menu when you install one of them.
#juju 2013-03-22
<hazmat> Sander^home, not at the moment
<hazmat> Sander^home, the future notion is that you could preconfigure a whole set of services and relations as blueprint/stack and reuse that.
<mariusko> Hi
<mariusko> I have a problem with new machines staying in pending state forever (>1 day)
<mariusko> 35: instance-id: pending
<mariusko> They are new units to existing services
<melmoth> looks like juju cannot create new vms. if you are using openstack as a backend, do check you can start new vms with nova.
<mariusko> melmoth: it is aws
<mariusko> I don't see the pending instances in their admin console
<melmoth> i never used aws, but it looks like juju was not able to start a new vm.
<melmoth> you may want to try to fire up a new vm manually, just to check it works, and maybe use --verbose with juju, to spot errors?
<mariusko> how do I do that? And is there somewhere I can get logs? In node 0?
<melmoth> not sure how you do that on aws. for the logs, i use --verbose, i'm not sure where to look on node 0
<mariusko> "--verbose add-unit" gave no interesting information. The process probably happens asynchronously
<mariusko> juju debug-log neither
<mariusko> melmoth: hmm, now I see the problem probably. Machine 0 has: "agent-state: not-started"
<mariusko> Then not possible to SSH into, so I guess I need to force a restart of it
<jcastro> I've got some open questions with bounties if someone is looking to help out: http://askubuntu.com/questions/267409/could-not-internally-obtain-zookeeper-handle
<jcastro> http://askubuntu.com/questions/267689/agent-state-pending-in-juju-node-null-public-address-associated
<jcastro> marcoceppi: hah, new gitlab release
<jcastro> didn't juan just promulgate your older one?
<marcoceppi> jcastro, no someone else took it over
<marcoceppi> They're releasing once a month now, and this new release completely drops gitolite as a dependency
<jcastro> gotcha
<_mup_> Bug #1158841 was filed: fails to notice (or retry) if there's an error connecting to launchpad <canonical-webops> <juju:New> < https://launchpad.net/bugs/1158841 >
<mariusko> What to do about persistent "agent-state: not-started" on the master machine?
<AskUbuntu> How to add Landscape monitoring to machine 0 in juju | http://askubuntu.com/q/271298
<SpamapS> FYI, juju was just removed from Debian testing because of this FTBFS bug http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=702791
<SpamapS> IMO thats probably a good thing.. you don't want Debian users to be finding such an old'n busted juju for the next 2+ years
<jcastro> I agree, I prefer to try again with 2.0
<AskUbuntu> What to do when Juju machine 0 has got "agent-state: not-started" state | http://askubuntu.com/q/271312
<mariusko> Ideas about the above question? My environment is completely broken now :/
<jcastro> yeah, I think when the head node goes boom ...
<mariusko> Then you are lost?
<mariusko> Recreate environment is the only solution?
<mariusko> That must absolutely not happen if Juju is going to be ready for production use
<jcastro> yea, it's on the list to fix, as of now, the bootstrap node is not HA. :-/
<jcastro> there may be a way to recover it though
<jcastro> hazmat: ideas?
<mariusko> Ssh to its IP works, but the SSH key is wrong
<mariusko> I'm not sure which ssh key "juju ssh" uses
<SpamapS> jcastro: his head node isn't boom
<hazmat> SpamapS, 0.5.1 ouch..
<SpamapS> its working fine
<SpamapS> the machine agent just didn't start
<SpamapS> or died
<SpamapS> hazmat: I know
<SpamapS> hazmat: thats how Debian rolls
<jcastro> SpamapS: oh awesome, maybe we can recover
<SpamapS> tho, experimental still just has 0.6 :-P
 * SpamapS would love it if an actual juju user wanted to join the effort to maintain in Debian... :)
<jcastro> needs to support debian as a launched OS for people to care
<SpamapS> Juju really should get out of the game of defining everything needed to run its agent
<SpamapS> thats why Heat is more useful to me.
<SpamapS> the agents needed in heat are like.. 200 lines of python.
<SpamapS> and you can emulate them w/ curl calls
 * SpamapS lunches
<jcastro> koolhead17: ask.openstack.org looks great dude!
<koolhead17> jcastro, we made it finally
<koolhead17> :D
<koolhead17> jcastro, can we RT it from our ubuntu cloud handle :P
<jcastro> yeah
<jcastro> link me to your announcement please!
<koolhead17> Am one of the mods there & would love if we get some of our hackers there
<koolhead17> https://twitter.com/koolhead17/status/315177524005048320
<jcastro> got it
<koolhead17> jcastro, can we make an  internal announcement to in our list?
<jcastro> the ubuntu server list? sure, why not!
<koolhead17> jcastro, shoot it then. we need our folks in there too :P
<koolhead17> jcastro, will you be at OS summit?
<jcastro> yep
<Catbuntu> jcastro :3
<sarnold> https://juju.ubuntu.com/docs/subordinate-services.html#usage
<sarnold> what happens at the "juju deploy rsyslog" point?
<m_3> sarnold: not much... charm is looked up, validated, cached in bootstrap node... sort of "staged"... but nothing's deployed on any service units until relation-time
<sarnold> m_3: cool, thanks :)
<m_3> sarnold: /me wants to see that `juju deploy rsyslog --with <primary-service>`
<m_3> sarnold: but bigger fish atm
<sarnold> m_3: would that deploy the primary with the rsyslog already in place? or deploy the primary and then rsyslog?
<m_3> sarnold: desired workflow for me would be `juju deploy primary && juju deploy sub --with primary`
<sarnold> m_3: aha
<m_3> sarnold: but either way would work... just want fewer no-op steps
<m_3> alternatively: `juju deploy primary --with sub1,sub2,sub3`
<m_3> but I prefer the former
<m_3> makes it easier to split concerns between primaries and "aspects"
<sarnold> it's funny you mention that, I keep thinking of the (little) I've read about aspect-oriented programming every time I re-pick-up the juju documentation..
<m_3> sarnold: yeah, the paradigm really fits with infrastructure
<m_3> peeps focus on primary infra
<m_3> but there's a _lot_ of work on aspects that cut across all primaries
<m_3> backups, log aggregation, authentication, etc
<sarnold> heh, I was just going to ask about backups, logging, updating, audit logs..
<sarnold> firewalling? apparmor etc MAC configs?
<m_3> sarnold: also comes up a lot in thinking about the gui... how to cull and/or slice and dice your live whiteboard of your infra
<sarnold> m_3: ooof
<m_3> :)
<sarnold> m_3: I'm sure there's something beautiful there to show the layers of aspects on each service .. just good luck finding it :D
<m_3> haha
<m_3> yeah
<m_3> but worth poking at over time
<m_3> sarnold: it's interesting to consider how this concept of aspects fits into integrating juju with existing config mgmt tools
<m_3> i.e., audit logs managed by puppet living together (orthogonally) in a juju-based infrastructure
<m_3> but that's still open (and hard)
<m_3> might be the only practical integration story though
<sarnold> it might be the inevitable destination, but my instincts suggest it'll be a herculean task to get it right
<m_3> ack
<AskUbuntu> openstack-dashboard login | http://askubuntu.com/q/271382
<sarnold> there's a missing bullet point for "symlinks must be self contained within a charm" here https://juju.ubuntu.com/docs/policy.html
#juju 2013-03-23
<sarnold> is there an easy way to unbust juju's local provider on raring?
<sarnold> or would it be easier to just do everything in a precise? quantal? vm?
<sarnold> (both the raring and the ppa juju's wind up causing python troubles; archive juju leads to whoopsiedaisy problems, ppa juju leads to funny python traces in the juju debug-log, "SyntaxError: invalid syntax")
<_mup_> Bug #1159020 was filed: SyntaxError: invalid syntax <juju:New> < https://launchpad.net/bugs/1159020 >
<Carlo> hi all boys
<Carlo> is there any Italian here XD?
<hazmat> sarnold, i suspect there is an underlying python distro issue
#juju 2013-03-24
<marcoceppi> Is the juju "api" documented anywhere?
#juju 2014-03-17
<davecheney> marcoceppi: can I ask what the plan for testing trusty charms is ?
<davecheney> i have a machine that can only use trusty (don't ask)
<davecheney> so to test the local provider I need a good selection of trusty charms
<davecheney> currently i'm doing this with local forks of the charms
<davecheney> lazyPower: charm-helper-sh
<davecheney> which repo/ppa defines that package ?
<vila> it seems I encounter issues to bootstrap with 1.17.5 https://pastebin.canonical.com/106545/
<vila> 1.17.5 on saucy (my desktop) that is, for the exact same cloud/env I can bootstrap from my laptop (trusty) with 1.17.4
<vila> jam: around ? The above is critical for us we have a demo coming like... tomorrow
<jam> vila: investigating
<vila> jam: thanks !
<vila> jam: it may just be a matter of uploading 1.17.5 to whatever is behind tools-metadata-url (shared for the team) but I've never done that before (and hope I have write access there, failing that I'll setup mine)
<vila> jam: doing 'juju sync-tools --public --local-dir=. --all --debug -e hp' as a first step
<vila> done
<vila> jam:   concretely  tools-metadata-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/11289530460295/tools
<jam> vila: http://streams.canonical.com/juju/tools/releases/ doesn't have 1.17.5
<jam> I think we need to poke Curtis (sinzui)
<vila> jam: he's sleeping right now right ?
<jam> yes
<jam> he's UTC-6 IIRC
<jam> Curtis and Ben Howard manage streams.canonical.com
<vila> jam: won't do then :-/ I need something that worked yesterday :-(
<vila> jam: is there a workaround ?
<vila> jam: it's for hpcloud, my understanding is that we already had to setup something ourselves
<jam> vila: you can try "juju bootstrap --upload-tools" which generally requires you to only deploy to a matching series, but should work
<jam> vila: there *are* no official juju tools for 1.17.5 because they haven't been built yet
<jam> there is only the client
<jam> which, we shouldn't be publishing to Trusty until we have tools for it to use
<jam> but...
<vila> jam: ouch, so saucy is busted for now is what you're saying ?
<vila> jam: first things first. Hello there and thanks for helping ;)
<jam> vila: it shouldn't be in Saucy. Are you using a PPA there?
<jam> vila: hi
<vila>  *** 1.17.5-0ubuntu1~ubuntu13.10.1~juju1 0
<vila>         500 http://ppa.launchpad.net/juju/devel/ubuntu/ saucy/main amd64 Package
<jam> vila: "juju/devel" is in there
<vila> jam: is there a way to use a different ppa ?
<vila> meh, is there a different ppa I can use :)
<jam> vila: unfortunately, we probably don't publish 1.17.4 in another place, and ppa's don't keep a backlog :(
<vila> jam: ok, back to --upload-tools, what will that do ? Override tools-metadata-url for the current env ?
<vila> jam: any risk I break something else by trying ?
<jam> vila: that will look on your local system for a 'jujud' binary, package it up into a tarball, and upload it to the private bucket for your environment
<vila> jam: cool, sounds like it, trying
<vila> jam: we deploy against precise though... but worth a try
<jam> vila: so we fake the tools against the LTS
<jam> and for most purposes it works
<vila> 2014-03-17 08:15:13 INFO juju.environs.sync sync.go:176 downloaded tools/releases/juju-1.17.5.1-precise-amd64.tgz (5051kB), uploading
<vila> sounds encouraging
<jam> the binaries are statically linked, so usually the Series would work
<jam> (is unimportant?)
<jam> vila: you won't be able to do stuff like deploy i386 machines, etc.
<vila> jam: ack, only amd64 I'm pretty sure
<vila> jam: I'm not taking you out of bed I hope (lost track of your TZ ;)
<jam> vila: I'm UTC+4 now. Right at noon time
<jam> I need to go make lunch, but otherwise no problem
<vila> jam: ack, thanks, I'll let you know how it goes
<vila> jam: just to summarize, 1.17.5 shouldn't have entered the ppa ?
<vila> jam: or did it enter as part of a bigger work that got interrupted ?
<jam> vila: well, we need to release, so 1.17.5 should be added to the ppa, but IMO it shouldn't be added until we've published the corresponding tools to streams.canonical.com/
<vila> jam: ack, so wip and I was unfortunate to upgrade at the wrong time ?
<jam> vila: well, I think our process is wrong that we exposed you to it, but yes
<jam> hmm.... looks like we did publish tools to s3 http://juju-dist.s3.amazonaws.com/
<vila> jam: ack, we need the devel version, it's unfortunate that we run into this at this time but... if --upload-tools works I'm good and will warn others to *avoid* upgrading
<vila> at least until we get confirmation it's safe to do
<jam> ah, nm, those are "testing" ones
<jam> vila: is it working now?
<vila> jam: once the node 0 is bootstrapped I should be fine right ? But I'll need to use --upload-tools each time I destroy the env correct ?
<vila> jam: still uploading, network between me and hp is... slow
<jam> vila: it should use the same tools for everything you deploy, and yes if you destroy the environment, the tools should get removed at the same time.
<vila> ack
<jam> vila: I thought you had good upload in Strasbourg
<vila> jam: thought so too until I start using hp ;)
<vila> urgh timeout :-/
<vila> https://pastebin.canonical.com/106555/
<vila> re-trying
<vila> jam: re-trying went further, bootstrapped, pfew trying to deploy now
<vila> jam: deployment completed, looks like I'm back in business, thanks for the tip and keep me posted about 1.17.5 progress !
<jam> vila: thanks for being a guinea pig and noticing we broke stuff
<jam> looks like we already uploaded tools for ec2 and canonistack, I don't quite understand why we don't have it for HP and streams.canonical.com
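The workaround jam walks through above can be condensed into a short session. The environment name "hp" is taken from the discussion; the flags are juju 1.x client syntax:

```
# Package the locally installed jujud into a tarball and upload it to the
# environment's private bucket, bypassing the missing official 1.17.5 tools:
juju bootstrap -e hp --upload-tools

# Every machine deployed into this environment reuses the uploaded tools;
# destroying the environment removes them again. Sanity check:
juju status -e hp    # agent-version should match the local client
```

As jam notes, the uploaded tools are statically linked, so the series mostly doesn't matter, but cross-architecture deploys (e.g. i386) won't work.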
<zchander> ping marcoceppi
<marcoceppi> zchander: pong
<bodie_> boink
<rick_h_> jcastro: blog posts up at http://blog.mitechie.com/2014/03/17/juju-quickstart-and-the-power-of-bundles/ and http://jujugui.wordpress.com/2014/03/17/video-walkthrough-of-juju-quickstart/
<rick_h_> jcastro: want to give a final look before I put them up somewhere?
<jamespage> any charm-helpers around? need an ack on https://code.launchpad.net/~james-page/charm-helpers/ceph-fix-zap/+merge/209539
<mbruzek> Hi james-page  I can take a look
<jcastro> rick_h_, got it
<rick_h_> jcastro: cool, I put up on my twitter/G+ for now. Will try to put on reddit/HN later if you think it's a good idea.
<marcoceppi> jamespage: I'll merge in a few
<jcastro> rick_h_, yeah I'll try to coordinate that
<rick_h_> jcastro: rgr
<bodie_> is there a virtualbox or vagrant image specifically for dev?
<marcoceppi> bodie_: kind of
<marcoceppi> bodie_: https://juju.ubuntu.com/docs/howto-vagrant-workflow.html
<bodie_> ah yeah, I mean for core dev, with go and such
<bodie_> I'm just gonna set one up I think
<bodie_> been having a hell of a time getting 14.04 to work with this
<marcoceppi> bodie_: ah, no, but one would be nice
<marcoceppi> bodie_: i have experience getting go set up, etc, feel free to ping me if you run into issues
<bodie_> ok, much appreciated
<hazmat> bodie_, i thought the pinned version on gwacl resolved for you?
<bodie_> yeah, I got that taken care of
<bodie_> it's giving me problems with mongo now I think
<bodie_> setting up a 12.04 box just to make sure
<ghartmann> is there any way to control how a service is exposed in juju ?
<ghartmann> mainly I have my lan on 192.168.0.1 network .. and the exposed juju service is on 10.0.3.x
<marcoceppi> ghartmann: that's the local provider, not exactly a juju problem
<ghartmann> I wanted to use lxc
<marcoceppi> ghartmann: you'll have to bridge your network
<marcoceppi> with the lxc network
<ghartmann> I see
<ghartmann> I thought that you could expose and he would take care of iptables, etc
<marcoceppi> ghartmann: exposing is something that works against the providers firewall, not at the network level
<ghartmann> ok, makes sense now ! Thanks !
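For the local-provider case marcoceppi describes, one common approach (an assumption here, not a juju feature) is to bridge containers onto the LAN instead of the NATed lxcbr0. A sketch, assuming the host's LAN NIC is eth0; exact keys vary by release:

```
# /etc/network/interfaces on the LXC host (assumption: eth0 is the LAN NIC)
auto br0
iface br0 inet dhcp
    bridge_ports eth0

# /etc/lxc/default.conf: attach new containers to that bridge
#   lxc.network.type = veth
#   lxc.network.link = br0
```

Containers then get 192.168.0.x addresses from the LAN's DHCP server rather than 10.0.3.x from lxcbr0.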
<mbruzek> Hello #juju I am working on the review queue today and cannot review my own request.  Can I get a charmer to review https://code.launchpad.net/~mbruzek/charms/precise/rabbitmq-server/tests/+merge/202573
<lazyPower> mbruzek: i'll take a look at that MP shortly
<mbruzek> ack
* lazyPower changed the topic of #juju to: Reviewer on duty: mbruzek || Welcome!! Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP
<mbruzek> Hello #juju.  The merge proposal https://code.launchpad.net/~hopem/charms/precise/rabbitmq-server/lp1281753/+merge/209320 has passed my +1 review, and needs a review by an official charmer
<lazyPower> mbruzek: can you make cards for promulgation on the board?
<mbruzek> yes will do
<hazmat> rbasak, ping
<rbasak> hazmat: pong. otp, but can chat mostly.
<hazmat> rbasak, was looking at the jujuclient / test_jujuclient.. wrt to mir.. afaicr the underlying ws lib is py2... is that okay re main.. also the test_jujuclient wants an existing juju environment available.. i can write some additional tests that are mock based though
<hazmat> rbasak, basically it expects the user to provide env vars to point to a test env and creds, and then runs against it
<rbasak> hazmat: ah yes. I noticed that yesterday.
<rbasak> hazmat: py2 is fine in main.
<rbasak> (AFAIK)
<rbasak> hazmat: mock tests would be nice. I was going to proceed with the MIR on the basis of what's there now, and see what the MIR approval team has to say about it.
<hazmat> rbasak, okay.. i'll add in some mock tests then (will result in version incr)
<rbasak> hazmat: I'm almost done with preparing MIR reports and fixing up this dependency stack to main quality now. I just have to disable some tests in websocket-client, since some expect echo.websocket.org or something to be reachable.
<rbasak> hazmat: thanks!
<whit> hi all!
<marcoceppi> Hi whit o/
 * whit is looking for antonio @ canonical but doesn't know his nick
<bodie_> howdy
<whit> Hi marcoceppi !
<marcoceppi> whit: arosales is Antonio
<whit> ah… duh.
<lazyPower> Allo whit o/
 * whit realizes he knew that…
 * whit is a little punch drunk after babysitting a friend's kids late in the night while they went and had a baby
<whit> marcoceppi: danke!
<arosales> whit: hello
<arosales> lazyPower: congrats on your successful application into ~charmers
<lazyPower> Thanks arosales! Very proud to sport the title in my lower third.
<arosales> lazyPower: :-)
<sarnold> lazyPower: woo :) congratulations :)
<lazyPower> sarnold: I owe it all to tiny chocolate donuts
<lazyPower> \o/
<sarnold> lazyPower: hooray for little chocolate donuts!
<lazyPower> I've been waiting forever to use that quote...
<lazyPower> hey marcoceppi, question for you. I've got an open pending MP that extends the config section of nagios. Should I write tests for the current charm - as-is, or should i point to my branch, write tests for that pending the merge?
<marcoceppi> lazyPower: you can make your branch a pre-requisite for your testing branch
<lazyPower> ack
<lazyPower> talking about in the MP itself right?
<marcoceppi> yes
<lazyPower> kk, i'll do that.
<mbruzek> marcoceppi, How would one make a pre-req
<marcoceppi> mbruzek: when you create the merge proposal you can specify a pre-requisite branch
<mbruzek> Oh cool.
<zchander> ping marcoceppi
<marcoceppi> zchander: pong
<zchander> I found that you have developed a gluster charm? Do you have any info on how to use it properly? I keep getting a failed hook
<marcoceppi> zchander: it hasn't been updated in a long time
<zchander> I noticed.... ;) I am looking for a nice/simple (read: easy) way to create a storage environment to use with owncloud
<zchander> Mmmm, noticed I have a mysql charm which is dying for 6 hours now.....
<zchander> Is there a way to force the charm to stop
<marcoceppi> zchander: yes, juju terminate-machine --force <machine mysql is on>
<marcoceppi> zchander: I would recommend ceph
<marcoceppi> it's way more tested and in the charm store
<zchander_> marcoceppi: I'm back again....
<zchander_> marcoceppi: Just read back the irc log. I am going to try to terminate the charm
<zchander_> marcoceppi: Seems there is no machine assigned to mysql (??)
<marcoceppi> zchander: just do juju destroy-service mysql --force
<zchander_> marcoceppi: error: flag provided but not defined: --force
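A hedged sketch of the force-removal sequence discussed above, for juju 1.x clients where (as the error shows) destroy-service takes no --force flag. Service and machine names are hypothetical:

```
juju resolved mysql/0              # clear the failed-hook state first
juju destroy-service mysql        # then queue the service for removal
juju terminate-machine --force 3  # last resort: kill the stuck machine
```

A service with a unit stuck in an error state won't finish dying until the error is resolved or its machine is forcibly terminated.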
<themonk> does a subordinate charm share the same filesystem with its master charm?
<themonk> marcoceppi: does a subordinate charm share the same filesystem with its master charm?
<marcoceppi> themonk: yes
<themonk> marcoceppi: hi thanks :)
<jose> marcoceppi, lazyPower: hey, any of you guys around? I'm having a problem with a nested if statement on the mailman charm
<lazyPower> jose: surely
<lazyPower> jose: link to branch or gist or pastebin?
<jose> lazyPower: https://code.launchpad.net/~jose/charms/precise/mailman/trunk
<jose> when I deploy it, I don't know what happens but it says it's not started, and exits with error code
<lazyPower> jose: well, on line 70 your if syntax is incorrect..
<lazyPower> and i suggest encasing your variables with quotes as well.
<lazyPower> if [ "$currentpassword" != "$PASSWORD" ]; then #do stuff fi
<jose> oh
<lazyPower> you have the same thing occurring up on line 64
<jose> yeah
<jose> after the then, should a 'do' go there?
<lazyPower> are you going to be looping?
<jose> nope
<lazyPower> then i would abstain from using the do operator. Do's are typically used in conjunction with a while statement.
<jose> got it :)
 * lazyPower thumbs up
<lazyPower> go forth and conquer my friend
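lazyPower's two fixes (quote the variables, and terminate the condition with `;` before `then`) as a standalone snippet; the variable names are the hypothetical ones from the discussion, not the actual mailman charm code:

```shell
#!/bin/bash
currentpassword="old-secret"   # hypothetical values for illustration
PASSWORD="new-secret"

# Quoting both sides keeps the test valid even when a value is empty or
# contains spaces, and the ; closes the condition so "then" can share the line.
if [ "$currentpassword" != "$PASSWORD" ]; then
    echo "password changed"
fi
```

No `do` is needed here; as noted above, `do` belongs to loop constructs (`while`, `for`), not `if`.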
<jose> hey lazyPower, still having troubles with the password variable on the latest revision in config-changed
<jose> any clues?
<lazyPower> jose: whats the error output?
<jose> lazyPower: something like parameter not set
<jose> but the if is testing if the parameter is null
<lazyPower> jose: ok give me a few i'm not infront of my PC to look over the code
<jose> np
<lazyPower> jose: missing a ; on line 69
<lazyPower> the then statement should be inline with the if as well, but thats coding convention
<jose> urgh
<jose> it's always because of a sign
<jose> last time my code was wrong because a - was placed instead of a _
<lazyPower> jose: happens to the best of us, best thing you can do when encountering issues like this is take a step back and start unwinding complexity one layer at a time
<jose> will do next time :)
<jose> now let's hope it's good for deployment, trying
<jose> marcoceppi: hey, wanna host that livestream at ubuntuonair?
<jose> I just thought of creating a series of challenges to be done live :P
<marcoceppi> jose: well, I'm doing it through twitch, since I can superimpose my camera on my feed
<jose> that's right
<jose> good enough
<marcoceppi> jose: this is more like "I think I can do this in under an hour"
<marcoceppi> but I'd love to do some charming based challenges in the future
<jose> link me when it starts, I'd like to see it
<jose> lazyPower: well, it was all when defining the variables: I wrote PASSSWORD instead of PASSWORD
<lazyPower> Right on, i didn't notice that looking over it on my phone. Good catch.
<jose> lazyPower, marcoceppi: when you're in the mood for some charm reviewing, the mailman one is waiting for you
<marcoceppi> jose: mbruzek is the review on duty this week
<lazyPower> jose: outlook is for tomorrow - its getting late on the east coast.
<jose> lazyPower: enjoy the rest of your day!
<lazyPower> I'm only going to be online for another 30 minutes or so before i head off to go rabble rouse on steam :)
* mbruzek changed the topic of #juju to: Welcome!! Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP
<mbruzek> jose, What can I do for you?
<jose> mbruzek: hey! I was wondering if you could please review the mailman charm
<jose> just did a successful deployment
<jose> https://bugs.launchpad.net/charms/+bug/1199052 should be the bug link
<_mup_> Bug #1199052: New charm: mailman <Juju Charms Collection:New> <https://launchpad.net/bugs/1199052>
<mbruzek> Sure, but it will have to be tomorrow I was also heading out for the day.
<jose> that's fine :)
<jose> enjoy your evening!
<mbruzek> You too.  Thanks for being responsive
<mbruzek> I saw that you updated that today
<ev> is there any way to tell juju not to care about self-signed certs?
<ev> it doesn't like that swift's certificate cannot be verified
<lazyPower> sarnold: why didn't you tell me about ipdb sooner?
<sarnold> lazyPower: because I've never heard of it? :)
<lazyPower> OH MY
<lazyPower> http://blog.dasroot.net/ipdb-the-repl-you-didnt-know-about/
<lazyPower> i find it so awesome, i wrote about it
<sarnold> lazyPower: that's cool! :) thanks
<lazyPower> right?!
<lazyPower> i'm jazzed about this find
<sarnold> okay, scribbled into my ~/.notes file, hopefully next time I need it it'll be at hand :) thanks
#juju 2014-03-18
<rick_h_> lazyPower: bpython :P
<lazyPower> rick_h_: googling - hang on
<lazyPower> sexy, does it have an integrated breakpoint thingy too?
<lazyPower> and yes, thats the technical definition, breakpoint thingy.
<rick_h_> http://stackoverflow.com/questions/9892751/is-it-possible-to-use-bpython-as-a-full-debugger
<lazyPower> boo - requires a wrapper
<rick_h_> yea, I use it as my python repl but yea ipdb for pdb quick in
<lazyPower> i had no idea this existed, i've been complaining about a lack of pry since i started this journey
<lazyPower> now i can finally shut my face hole and get back to coding
<lazyPower> and with that, its time for me to steam. o/  have a good evening everybody. See you in the am
<lazyPower> davecheney: Hey before i jet, i saw your message. I'm not positive on which package provides charm-helper-sh but i'm fairly certain its not up to snuff
<lazyPower> https://launchpad.net/ubuntu/precise/amd64/charm-helper-sh - it appears there is a package for it in precise - which release are you using?
<davecheney> lazyPower: not precise
<davecheney> lazyPower: the one we can't announce yet
<davecheney> running on the computer only banks can afford
<davecheney> lazyPower: marcoceppi has a fix in progress https://code.launchpad.net/~marcoceppi/charms/precise/mysql/lp1293315/+merge/211360
<lazyPower> well it appears the last release was for quantal, i'm not seeing it in saucy
<lazyPower> yeah i just got that email. right on.
<lazyPower> Glad to see its being sorted, didn't want to leave you hanging since you asked me implicitly
<lazyPower> ta
<davecheney> lazyPower: no worries
<davecheney> there will be plenty more bugs
<davecheney> while we remove the implicit assumption that there is only one series
<hatch> is anyone else getting agent-state-info: '(error: container failed to start)' with 1.17.5 and local envs?
<hatch> on precise
<davecheney> hatch: can you get some details from ~/.juju/local/log/*.log
<hatch> davecheney sure, checking
<hatch> davecheney all-machines.log is empty
<hatch> machine-0 has
<hatch> 2014-03-18 00:31:59 WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "": no instances found
<hatch> 2014-03-18 00:32:19 ERROR juju.container.lxc lxc.go:211 container failed to start: container failed to start
<hatch> 2014-03-18 00:32:19 ERROR juju.provisioner provisioner_task.go:407 cannot start instance for machine "1": container failed to start
<hatch> 2014-03-18 00:32:29 WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "": no instances found
<hatch> and that's repeated for every instance after as well
<davecheney> hatch: brillant
<davecheney> enterprise grade
<davecheney> hatch: what series are you using ?
<hatch> precise
<davecheney> you can't use precise
<davecheney> local provider is not supported on precise
<sarnold> no? aww
<davecheney> kernel is too crap
<hatch> lies! I've been using precise for ever
<davecheney> lxc is too crap
<davecheney> hatch: all i can tell you is what I know
<davecheney> local provider is not supported on precise
<hatch> seriously though, this worked just fine before the latest update
<davecheney> hatch: we're both saying things that are true
<marcoceppi> hatch: delete the files in /var/cache/lxc/cloud-precise and try again?
<hatch> lxc really isn't supported on precise? It's just been working by fluke this whole time? lol
<hatch> marcoceppi trying
<sarnold> the lxc team does have a PPA you can use on precise, and maybe with a newer HWE kernel stack and that PPA it might work, but probably be just as unsupported :)
<davecheney> sarnold: exactly
<sarnold> heh actually I think I used pyjuju and lxc on precise way back when :)
<hatch> marcoceppi looks like that worked!
<marcoceppi> when in doubt, clear the cache
<hatch> haha yeah I totally blanked on that, thanks
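marcoceppi's cache fix as commands (path as given above; clearing it forces a fresh cloud image download on the next deploy):

```
# The local provider caches per-series cloud images under /var/cache/lxc.
sudo rm -rf /var/cache/lxc/cloud-precise/*
# Then destroy and re-bootstrap the local environment so the next
# container build pulls a clean image instead of the corrupt cached one.
```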
<hatch> now I can get back to working on the ghost charm again
<jose> hey guys, any charmers around
<jose> ?
<vladk> jam, good morning
<jam> vila: ping, it looks like official 1.17.5 binaries *were* uploaded to HP cloud, in the official bucket. If you unset "tools-metadata-url" I believe it will select the right bucket for you
<vila> jam: ha ! Interesting, I've been having issues uploading to hp starting... yesterday and I'm trying to diagnose that with support. I will try to unset the tools-metadata-url as soon as I can, thanks for the feedback !
<jam> vila: let me know if it works
<vila> jam: trying, just got 2014-03-18 08:40:13 INFO juju.environs.bootstrap bootstrap.go:58 picked newest version: 1.17.5
<vila> jam: sounds encouraging ?
<jam> vila: I believe if it got there, it found the tools
<jam> though I also see them in the official location finally: http://streams.canonical.com/juju/tools/releases/
<vila> jam: bootstrapped, deploying
<jam> yay
<vila> node 0 agent-version: 1.17.5
<vila> jam: still bitten by the 'broken pipe' on swift upload we do before the deployment :-/
<vila> jam: juju is not involved there, that's the issue I'm tracking with hp
<zchander> Good afternoon, everyone. Anyone around who can help me with Ceph/Juju/MaaS with physical PCs with one drive (only /dev/sda) installed
<jcastro> sinzui, hey do we test on windows 8.1? http://askubuntu.com/questions/435928/installing-juju-client-on-windows-8-1
<marcoceppi> mbruzek: can you review something right quick for me
<mbruzek> yes
<mbruzek> What  is it?
<marcoceppi> https://code.launchpad.net/~marcoceppi/charms/precise/mysql/lp1293315/+merge/211360
* marcoceppi changed the topic of #juju to: Weekly Reviewer: mbruzek || Welcome!! Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP
<sinzui> jcastro, the test is on win 2012 server
<mbruzek> marcoceppi, reviewed.
<mbruzek> marcoceppi, Can you remind me if the UnitSentry.file_contents() runs as root or as ubuntu?
<jcastro> sinzui, I can test on win 8.1
<marcoceppi> mbruzek: everything sentry is root
<mbruzek> Ok thanks.
<sinzui> jcastro, I believe win 2012 server is 8.1 without the gui
<jcastro> sinzui, filed a bug, I can confirm we are broken on windows 8.1
<sinzui> jcastro, did you use 1.17.5? Trunk is broken with all windows at the moment
<jcastro> I am using what's on the website, 1.16.6
<sinzui> jcastro, thank you
<sinzui> jcastro, marking a bug confirmed just delays it from being triaged. We see the new bugs first
<jcastro> oh sorry about that, I marked it confirmed because another person had the same problem, I'll remember that moving fwd.
<viperZ28_> Does Juju have an ability to ensure the number of instances requested stays running?
<viperZ28_> i..e. I have a RabbitMQ cluster with 2 units, one of the machines goes down, does Juju have the ability to start another instance or attempt to restart the downed instance?
<sinzui> jcastro, would you be able to also test 1.16.5. I suspect a signing issue, and it may relate to Bug #1288873
<_mup_> Bug #1288873: Installing juju windows binarys fails on windows 7 <windows> <juju-core:Incomplete> <https://launchpad.net/bugs/1288873>
<sinzui> jcastro, I got the binary signed differently for 1.16.6 and 1.16.5. I wonder if a proper signing will fix the issue
 * sinzui is thinking signing is the issue because that cannot be tested
<jose> hey guys, can anyone give me a hand on how should I use the mysql charm?
<viperZ28_> https://www.irccloud.com/pastebin/Asm6AF5E
<jcastro> jose, just ask!
<jcastro> sinzui, that's interesting
<sinzui> jcastro, I am looking for the presigned binary I sent to IS
<jose> jcastro: so, I want to charm openphoto but uses mysql, how do I get the credentials?
 * sinzui removed it from the milestone when the signed version was delivered :(
<jcastro> anything I can do to help? broken on windows is bad
<jcastro> jose, https://juju.ubuntu.com/docs/interface-mysql.html
<viperZ28_> so it looks like Juju does not have the ability to enforce runtime state, only startup
<jose> jcastro: so if I run `relation-get user` I will get the user for the db as a string? as simple as that?
<jcastro> yessir
<jose> jcastro: was that a yes to viper or to me?
<jcastro> to you
<rick_h_> jcastro: <3 that doc /me wants more!
<jcastro> jose, if you want to debug it live, do a `juju debug-hook` into the unit
<jcastro> and you can run the commands in the hook context
<jose> that's awesome
<jcastro> rick_h_, yeah I wish we had one for every interface
<jose> thanks a bunch, going to work on that now :)
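The interface doc plus jcastro's debug tip combine into a workflow like this. Charm and unit names are hypothetical, and the actual command is spelled debug-hooks:

```
juju debug-hooks openphoto/0        # attach a tmux session to the unit
# In another terminal, trigger the hook you want to inspect:
juju add-relation openphoto mysql
# Inside the tmux window that opens for db-relation-changed:
relation-get user                   # username mysql published
relation-get password               # matching password
relation-get -r db:0 - mysql/0      # or dump all keys the remote unit set
```

The hook tools (relation-get, relation-set, etc.) only exist inside a hook context, which is exactly what debug-hooks gives you interactively.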
<jcastro> viperZ28_, what are you trying to do?
<viperZ28_> In my test I brought up a multi-node RabbitMQ cluster, I then took one of the machines out using `lxc-shutdown`,
<viperZ28_> Juju did not try to restart the machine or spin up another one
<jcastro> ah yes, there's been some discussion around that
<viperZ28_> I was hoping Juju would sense the downed machine and make an attempt to restart it
<viperZ28_> jcastro: was anything decided?
<jcastro> I am trying to find the discussion
<viperZ28_> I am also looking for plans to integrate with vSphere/ESX stack
<jcastro> yeah I think one of the guys in #juju-dev would know better
<viperZ28_> jcastro: thanks
<sinzui> jcastro, I must congratulate myself in my effort to make sure the non-signed 1.16.6 never got out.
<jcastro> heh
<sinzui> jcastro, I can provide the 1.16.7 that was made when I incremented 1.16.6. I can remake 1.16.6 with the original tarball.
<jcastro> sinzui, whatever you think is best, I only want us to work again on windows
<sinzui> well I should remake and ask for a resigning. We can then test both
<jose> hey, let's say I want to relate mysql and ubuntu for testing purposes, how should I?
<lazyPower> jose: if the charm doesn't implement the expected relationship hooks, what you ask is nigh impossible
<lazyPower> jose: pick a charm that has a :db relationship and you should be able to relate them via juju add-relation
<jose> it's fine, as it's just for testing I'm installing mysql locally
<lazyPower> well i dont understand what you're trying to do, there's no application on either end to support it. If you're just going to get/set the data exchange as an experiment?
<jose> I'm trying to do a deploy of phpbb and wanted to relate it with the mysql charm
<jose> so if it works then I just copy it over
<lazyPower> https://github.com/charms/wordpress/blob/master/hooks/db-relation-changed - here's an example of how the wordpress charm does it
<lazyPower> mediawiki is another good one to reference.
<jose> so, if wp doesn't have a relation with mysql, does it still run?
<jose> (I understand not)
<lazyPower> it doesnt do anything, it no-ops until the relationship is present
<lazyPower> i have a charm written in chef thats optimized for single host / scale out operation - its got a few bugs that need to be worked out though.
<lazyPower> namely, it re-seeds the scale database with seed data on add-unit, and the secret-key is not present until first run, and chef is dealing me fits considering its pre-compiled on run, so i have to figure out how to lazy load it, but thats a horse of a different color
<jcastro> marcoceppi, arosales: ok so I have a card, republish charm school schedule
<jcastro> how do we feel about just declaring "the first friday of every month"? Barring holidays
<arosales> jcastro: are you thinking once per month?
<jcastro> yeah + any ondemand
<jcastro> I have the topic list, we just need to pick the next topic
<jcastro> marcoceppi, how about Juju plugins?
<marcoceppi> jcastro: we can do that
<marcoceppi> it's a short one
<marcoceppi> one we can probably do out of band
<arosales> jcastro: suggest to schedule 2-3 months out if you are looking at 1 charm school / month
<jcastro> ok
<arosales> we may need to see if that is too little or just right
<jcastro> hey we never did "troubleshooting Juju"
<arosales> as we do more and record on specific topics though the video resource library grows
<jcastro> that could be a nice 2 parter, go deep, we've never done that before
<jcastro> and it's on the list to do
<arosales> +1 on trouble shooting part 2
<marcoceppi> jcastro: we should do the troubleshooting next
<arosales> +1 on juju plugins
<marcoceppi> we can talk for hours on that
<jcastro> yeah so
<jcastro> plugins
<jcastro> troubleshooting 1
<arosales> osx and windows workflows
<jcastro> troubleshooting 2
<jcastro> and then I can put the rest of the topics on there, see what people want
<jcastro> arosales, ooh, those are good
<jcastro> we should do the workflow one before release
<jcastro> maybe do 2 in april? one with marco(plugins), one with chuck(OSX workflow?)
<arosales> jcastro: maybe sneak in an "intro to charming" every other month around the 15th
<jcastro> lazyPower ^^ what do you think
<arosales> jcastro: that also sounds good
 * lazyPower thumbs up
<jcastro> marcoceppi, ok you are 6 april
<jcastro> lazyPower, you are 11 april
<lazyPower> next month is going to be crazy busy
<sarnold> welcome aboard :)
<lazyPower> jcastro: on the eco calendar?
 * lazyPower didn't look - because lazy
<jcastro> no, Ubuntu Fridge calendar, since it's a public project-wide thing
<jcastro> It will send invites
<lazyPower> hokay
<jcastro> I'll add it to your work calendar too
<jose> jcastro: is it fine if you do it at uonair?
<jose> I'd love to see it there
<jcastro> yeah, I need to know how to use the account though, I don't remember how
<jose> I can walk you through again, I have all the credentials and everything set
<jcastro> actually, if you just jet me a mail that would be <3
<jose> writing that now!
<jose> jcastro: do you still have the password or should I resend it?
<jcastro> I do not
<jcastro> Sorry for the calendar spam everyone!
<sinzui> jcastro, I added a win installer for you to test https://bugs.launchpad.net/juju-core/+bug/1294198
<jose> you have mail, jcastro!
<_mup_> Bug #1294198: Juju package doesn't work on Windows 8.1 <windows> <juju-core:Triaged by sinzui> <https://launchpad.net/bugs/1294198>
<sinzui> jcastro, I am going to do the reverse. I will feed the signed juju into CI. I hope it fails, because I can add a step to retest the signed win installer to ensure we don't post crack
<sinzui> \o/ jcastro the signed win installer is rejected by CI.
 * sinzui tries the previous installer
<mbruzek> I am running trusty and lxc
<mbruzek> Has anyone seen this error
<mbruzek> http://pastebin.ubuntu.com/7115719/
<mbruzek> '(error: container "mbruzek-local-machine-1" is already created)'
<mbruzek> It is preventing my amulet tests from completing.
<mbruzek> lazyPower, marcoceppi ^ ?
<marcoceppi> mbruzek: sudo lxc-ls --fancy
<mbruzek> http://pastebin.ubuntu.com/7115730/
<sinzui> jcastro, CI accepts 1.16.5's win installer. We know the signing is bad, and we can test it
<mbruzek> stopped machines?
<marcoceppi> mbruzek: destroy environment
<mbruzek> I saw this error before and destroyed environment at that time
<mbruzek> I will also run my clean up script to get rid of the lxc
<marcoceppi> mbruzek: after you destroy environment
<marcoceppi> sudo lxc-ls --fancy
<mbruzek> empty after I ran the clean up script
<marcoceppi> mbruzek: try bootstrapping again
<mbruzek> Sure.
<jcastro> sinzui, I will give it a shot
 * sinzui ponders deleting the signed win installer from the release and uploading the unsigned.
<fishy_> Is upgrading lxc on ubuntu 12 going to break everything?  My lxc-ls doesn't have a --fancy and I am jealous
<jcastro> sinzui, works!
<Guest25908> juju sync-tools is stuck in listing target bucket. what could be the problem?
<Guest25908> its not showing any output..
<fishy_> sudo lxc-ls --fancy ls: unrecognized option '--fancy' Try `ls --help' for more information. ls: unrecognized option '--fancy' Try `ls --help' for more information.
<marcoceppi> fishy_: lxc-ls --fancy is lxc-ls on precise
<fishy_> that just lists machines, not IPs and such
<marcoceppi> fishy_: well, there's nothing wrong with upgrading lxc that I'm aware of, though you might need a newer kernel
<fishy_> sounds dangerous
<fishy_> real problem:  i started a wordpress guy in a local environment and exposed him ala tutorial.. cannot get to him in browser.    Tried adding IPtables rule like so: iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to 10.0.3.93:80     .  Still no luck..
<fishy_> lxc is doing something weird with the bridge
<fishy_> by default
<lazyPower> fishy_: can you ssh to the unit with juju ssh wordpress/0?
<fishy_> ya can get in that way
<fishy_> just not though the external IP
<lazyPower> ...external ip?
<lazyPower> OH! you mean you've deployed an lxc container on another machine and you want to route that traffic into the lxc container(s)
<fishy_>         open-ports:         - 80/tcp         public-address: 10.0.3.73
<fishy_> no its on my box
<fishy_> i want to just go over loopback to it
<lazyPower> hmm, whats your bridge interface? lxcbr0?
<fishy_> ya
<lazyPower> and you get timeouts when pinging it? is your network on the 10.0.3.x octet?
<fishy_> i think 10.0.3.x is used by some real machines on my network
<lazyPower> probably having an ip collision then
<fishy_> so pings hit that..  that was my first idea
<fishy_> ok, back to how to change LXC network
<lazyPower> that or the service isn't actually up
<lazyPower> thats configurable in... 1 sec
<fishy_> i do have 1 error in juju status I should look at too
<fishy_>   mysql:     charm: cs:precise/mysql-37     exposed: false     relations:       cluster:       - mysql       db:       - wordpress     units:       mysql/0:         agent-state: error         agent-state-info: 'hook failed: "start"'         agent-version: 1.16.6.1         machine: "2"         public-address: 10.0.3.103
<lazyPower> there's the issue
<lazyPower> wordpress didn't actually do anything
<lazyPower> it noop's on setting up the web interface until the mysql host has joined.
<fishy_> because it couldnt talk to mysql?
<fishy_> ok
<lazyPower> that may be a bit misleading, it does stuff
<lazyPower> but you wont be able to get to it unless that completes successfully
<fishy_> 2014-03-18 19:32:30 ERROR juju.worker.uniter uniter.go:350 hook failed: exit status 1
<fishy_> from the wordpress log
<lazyPower> which hook?
<lazyPower> start?
<fishy_> yes
<lazyPower> try juju resolved -r mysql/0 -- if its a temporary hiccup, it should resolve itself on the re-run of the hook.
<fishy_> 2014-03-18 19:32:27 INFO juju.worker.uniter uniter.go:348 running "start" hook 2014-03-18 19:32:27 INFO juju.worker.uniter context.go:255 HOOK mysql stop/waiting 2014-03-18 19:32:30 INFO juju.worker.uniter context.go:255 HOOK start: Job failed to start 2014-03-18 19:32:30 ERROR juju.worker.uniter uniter.go:350 hook failed: exit status 1
<fishy_> same, need to find real logs
<lazyPower> fishy_: better yet, lets interactively debug it
<lazyPower> juju resolved -r mysql/0; juju debug-hooks mysql/0
<lazyPower> it'll be a race to see if it attaches to the tmux session before the hook re-runs, but 90% of the time, it works
<fishy_> awesome
<lazyPower> otherwise, you just re-run juju resolved -r mysql/0 after you've attached (in a sep. terminal)
<lazyPower> then you can re-run the hook that failed, eg: `hooks/start`
<lazyPower> if you get no output, add the xtrace to the top of the hook "set -eux"
<lazyPower> and then re-run it and see where it turned into a bowl of chunky salsa
<fishy_> mysql/0:start % hooks/start stop: Unknown instance:  start: Job failed to start
<lazyPower> ah
<lazyPower> alter line 5 to read as follows
<lazyPower> service mysql start || service mysql restart
<fishy_> line 5 of what?  in the tmux?
<lazyPower> looks like it never started up in the config-changed hook
<lazyPower> the start hook in hooks/start
<fishy_> mysql/0:db-relation-joined % ./start  start: Job failed to start stop: Job has already been stopped: mysql
<lazyPower> interesting
<lazyPower> marcoceppi: this happened before, it was due to a memory tuning bug right?
<lazyPower> or am i remembering some ancient workaround?
<fishy_> was ./start the right way to execute the start hook?
<fishy_> or do I need to do some more magic
<lazyPower> nah thats it if you're in $CHARM_DIR/hooks
<fishy_> ya
<lazyPower> however, typically you path it as hooks/start from $CHARM_DIR, as charms written in bash assume they are being called from $CHARM_DIR
<lazyPower> just a bit of info to keep in mind moving forward
<fishy_> what did set -eux do
<lazyPower> sets explicit, xtrace, and i forget what u does
<fishy_> i know e
<fishy_> cool
<lazyPower> set -u makes Bash check whether you have initialised all your variables. If you haven't, Bash will throw an error about unbound variables.
<lazyPower> ah! thats right. strict mode
<fishy_> nice
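A tiny stand-alone illustration (nothing juju-specific; the variable names are invented) of what the `-e` and `-u` flags from that `set -eux` line actually do:

```shell
#!/bin/bash
# Plain-bash sketch of the flags in "set -eux":
#   -e  exit immediately when a command fails
#   -u  treat expansion of an unset variable as a fatal error
#   -x  trace: print each command before running it
unset JUJU_DEMO_UNSET   # make sure the demo variable really is unset

# With -u, expanding the unset variable aborts the subshell before "reached":
out_u=$( { set -u; echo "$JUJU_DEMO_UNSET"; echo "reached"; } 2>/dev/null )
echo "-u demo captured: '$out_u'"   # prints: -u demo captured: ''

# With -e, the failing `false` aborts the subshell before "reached":
out_e=$( set -e; false; echo "reached" )
echo "-e demo captured: '$out_e'"   # prints: -e demo captured: ''
```

In a hook, `-x` is the useful one for debugging (it shows exactly which command blew up), while `-e` and `-u` make the hook fail loudly instead of limping on.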
<sinzui> jcastro, One more test. IS resigned the installer. https://launchpad.net/juju-core/1.16/1.16.6 has the signed version that CI liked. Does your machine like it too?
<lazyPower> fishy_: well, i dont remember the exact cause of this bug but there is a mailing list post about it
<lazyPower> let me try to fish it up for you
<fishy_> The hook im "stuck on" is db-relation-joined
<fishy_> could it be something that hook did that broke me
<fishy_> that now is making start break
<lazyPower> wait, "stuck on"
<fishy_> my tmux window title
<lazyPower> i dont know what you've done so far, i was under the impression the service was stuck on the "start" hook
<fishy_> is db-relation-joined
<lazyPower> did you exit from the start hook? as db-relation-joined would be the next hook in the sequence.
<fishy_> = thats the step im on?
<lazyPower> that debug-hooks interactive tmux session is like a queue, when you complete one hook context it moves into the next automatically
<lazyPower> and db-relation-joined is what creates the mysql user, database, password and hands off the information to the requesting charm, so without an active mysql daemon on the host, it will fail to do anything
<fishy_>       mysql/0:         agent-state: error         agent-state-info: 'hook failed: "db-relation-joined"'
<fishy_> makes sense
<jose> hey guys, how can I make a loop which ends when a statement is true?
<jose> that's in bash
<fishy_> you can ram your statement in a while block
<fishy_> like its 1996
<lazyPower> jose: http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_09_02.html
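For reference, the two idiomatic shapes for jose's question (plain bash, toy values) are an `until` loop, or a `while` with the test negated:

```shell
#!/bin/bash
# Loop *until* the condition becomes true:
i=0
until [ "$i" -ge 3 ]; do
    i=$((i + 1))
done
echo "$i"    # prints: 3

# Equivalent: loop *while* the condition is still false:
j=0
while [ "$j" -lt 3 ]; do
    j=$((j + 1))
done
echo "$j"    # prints: 3
```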
<lazyPower> fishy_: https://lists.ubuntu.com/archives/juju/2014-February/003421.html
<lazyPower> here's the thread
<jose> uh, looks neat, thanks!
<fishy_> hum my innodb_buffer_pool_size is set to 20 GB, that seems suspicious
<fishy_> investigating mysql
<mbruzek> fishy_, http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-7.html
<mbruzek> sorry that was for jose.
<mbruzek> But it looks like lazyPower was all over that one
<jose> mbruzek: thanks! btw, any chance you already checked the mailman charm?
<mbruzek> Not yet, there are a few other charms ahead of it on the queue
<jose> ok then :)
<mbruzek> I will try to get to it as soon as possible.
<fishy_> where do hooks live inside a container
<lazyPower> fishy_: are you still in your tmux session
<jose> fishy_: /var/juju/ I think
<fishy_> nope ;/
<lazyPower> /var/lib/juju/agents/unit-<service>-<unit#>/charm/hooks
<lazyPower> if you're in hook context, you have an environment variable $CHARM_DIR
<lazyPower> so, use either/or depending on context
<fishy_> i dont see any env vars set
<fishy_> after a juju ssh
<lazyPower> right, you *have* to be in the context of an executing hook to get those env variables
<lazyPower> juju is very judicious in keeping its environment to itself.
<fishy_> and debug hooks is how you do that?
<fishy_> got mysql up:  mysql    23066  0.4  0.5 1485292 129720 ?      Ssl  20:01   0:00 /usr/sbin/mysqld
<lazyPower> correct. so long as you're in one of those titled tmux sessions, you're in a hook-execution-context
<lazyPower> otherwise, you have none of the amenities, and its just a plain ole tmux session
<fishy_> fancy
<lazyPower> cool - what was the fix for you? changing the innodb size?
<fishy_> yah from 20GB down to 1GB
<fishy_> and a restart of mysql
<fishy_> however that value is populated seems wrong
<fishy_> my machine has 24GB ram.. but giving 20 to mysql seems excessive
<fishy_> i guess on a 1 GB vm, giving it 756MB wouldn't be bad
<lazyPower> fishy_: i hate to be a bother, but could you file a bug with those findings for me? +1 that it affects you and give us a brief summary of what you did to correct the problem? the more feedback we have on this the better the charm will be for future users.
<lazyPower> https://bugs.launchpad.net/charms/+source/mysql
<fishy_> yes
<fishy_> while I type that up..  any pointers on changing my LXC IP range to not conflict?
<lazyPower> Excellent, ping me with the bug URL after its filed.
<lazyPower> ah right - i forgot about that 1 moment
<lazyPower> fishy_: you edit the network in /etc/default/lxc
<lazyPower> fishy_: http://paste.ubuntu.com/7116089/
<lazyPower> aside from having the bridge be br0 (which unless you create that bridge, wont be available to you) - that's what the config will look like.
<lazyPower> and apparently some stray backslashes got in there when copy/pasting, strange...
<lazyPower> so yeah, disregard the bridge and the random backslashes.
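Minus the stray backslashes, the relevant settings in that file (as shipped by the lxc package on precise; the 10.0.4.x values here are just an example of a subnet that doesn't collide with a 10.0.3.x LAN) look roughly like:

```shell
# /etc/default/lxc -- choose a subnet that does not overlap your LAN,
# then restart the lxc-net service for the change to take effect
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.4.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.4.0/24"
LXC_DHCP_RANGE="10.0.4.2,10.0.4.254"
LXC_DHCP_MAX="253"
```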
<fishy_> reported https://bugs.launchpad.net/charms/+source/mysql/+bug/1294334
<_mup_> Bug #1294334: mysql charm blows up on out of memory error <mysql (Juju Charms Collection):New> <https://launchpad.net/bugs/1294334>
<lazyPower> fishy_: thanks! You should be emailed when a fix lands and the bug is resolved.
<lazyPower> sorry that was such a long winded fix too - but its intermittent in when it decides to rear its ugly head. I've been tracking that one on and off for about a month now.
<fishy_> i still never saw a good error anywhere
<fishy_> should have ended up somewhere
<fishy_> but at least i have set up mysql enough to know when I see a crazy param
<lazyPower> yeah, that was the proposed work around in the mailing list too - to reduce the innodb buffer pool
<lazyPower> thing is, when i deploy mysql in lxc, i dont see that behavior, and haven't since i was on the 1.16 series of juju
<fishy_> on a local deploy?
<lazyPower> then again, i'm tracking the devel releases now - are you on the ppa:juju/stable release?
<lazyPower> right
<fishy_> do you have a ton of ram on your machine
<lazyPower> 8gb, fairly normal size for a light weight dev rig
<fishy_> ya it may only manifest with a lot of ram
<fishy_> like 24
<fishy_> i am on whatever tutorial said, assume stable
<lazyPower> Well, my juju farm in the closet has 32gb of ram, and doesn't do that either
<lazyPower> yeah, you would be on stable, we dont recommend people use devel unless you want shiny features and the inability to safely upgrade (or so i've been told, ymmv, i haven't done it)
<fishy_> i think i need   a) machine with 24g ram.   b) swap space disabled.     c) about 15gb of other stuff open
<fishy_> could be part of my setup
<lazyPower> well, by default when deploying to lxc i dont see the cause/effect for having a 20gb innodb cache - you're not exactly serving up the wikipedia database - however - there are those using lxc in production
<lazyPower> so thats a slippery slope
<fishy_> well and if i deploy it to a 20GB machine with nothing else on it
<fishy_> nothing better to use it for
<lazyPower> fishy_: if you need anything else dont hesitate to ping! I'm going to stuff my head back in a terminal
<fishy_> assume most 20GB machines want a big cache
<fishy_> ok thanks
<marcoceppi> lazyPower: I've got 32gb, but mysql doesn't have any issues on my machine
<lazyPower> marcoceppi: its strange how intermittent it is. we've confirmed 3 cases of it being silly now
<lazyPower> sebas, daniele, and now fishy - my issues resolved when i jumped to 1.17.4 - not sure WHY that made a difference, and may be coincidental
<varun123> i am trying to install juju, in https://help.ubuntu.com/community/UbuntuCloudInfrastructure inside install juju it says "juju requires ssh keys to be able to access the deployed nodes" here which node does deploy nodes refer to?
<lazyPower> varun123: everything deployed with Juju uses key based authentication. You will not be given accounts with username/password combinations.
<varun123> okay.
<varun123> so we need to create ssh keys for all nodes?
<lazyPower> Juju will handle that for you. When you run juju init on the station you will be performing your orchestration from, the keys are generated and placed in ~/.juju/ssh
<jcastro> I think in the past we didn't generate keys automatically
<jcastro> which is why we say that
<lazyPower> its a good callout too - i can see where people would be confused. Not everybody does keybased authentication, even though they should.
<lazyPower> </my 2 cents>
<varun123> okay but by juju init u meant juju bootstrap? because juju-quick-start doesnt say juju init
<jose> guys, should I use db-relation-joined or db-relation-changed for when a DB starts a relation with my charm?
<lazyPower> jose: authors choice. Do you want to run the code on everytime something changes on the db unit?
<jose> not at all, just run the code when the relationship is done to set up some config files
<lazyPower> jose: sounds like a joined hook to me
<jose> ok then :)
<timrc> juju destroy-service really needs --force and --with-fire options.  The --force option is self-explanatory the --with-fire option is basically an alias of --force but conveys a higher level of frustration :)
<timrc> Especially important while developing and testing a charm :)
<marcoceppi> timrc: ;)
#juju 2014-03-19
<jose> hey guys, any known way to run 1.18 on saucy?
<tvansteenburgh> is it legit to call `juju set` from within a charm hook?
<tvansteenburgh> for example, if i wanted to change on a config value on the "other" service when a relation is joined
<tvansteenburgh> change a config value
<marcoceppi> tvansteenburgh: you can't actually do that. Charms can't set configuration for themselves or other services
<marcoceppi> tvansteenburgh: if you need to communicate from one charm to another in a deployed environment you should use relation-set and relation-get
<marcoceppi> jose: 1.18 isn't released yet
<jose> oh
<jose> marcoceppi: mind a quick PM?
<tvansteenburgh> marcoceppi: thanks
<marcoceppi> 1.17.5 is latest devel, 1.16.6 is latest stable
<marcoceppi> jose: sure
<jose> marcoceppi: hey, do you know approximately how much time does the mysql charm take to set the variables (database, user, password...)?
<marcoceppi> jose: not very long, shouldn't take more than a few seconds. If you're thinking about putting a sleep in your script to wait for variables then you're doing something wrong
<jose> marcoceppi: that's just what I did
<marcoceppi> don't
<jose> I did `while [ -z "$DB" ]; do sleep 10; done`
<marcoceppi> you won't get them, because relation-get variables are locked at the start of the run of the hook
<jose> hmm, how should I set it, then?
<marcoceppi> they're only updated when a hook is done executing
<marcoceppi> use relation-changed hook
<jose> instead of a relation-joined?
<marcoceppi> right, relation-joined is basically anything you need to do pre-relation
<marcoceppi> it's like an initial handshake
<jose> cool!
<marcoceppi> if you're just waiting for data
<jose> (yeah, you read my mind)
<marcoceppi> put your code in relation-changed, and add an idempotency guard
<jose> a... what?
<marcoceppi> like, db=`relation-get database`; if [ -z "$db" ]; then exit 0; fi
<jose> oh, gotcha
<marcoceppi> that way, if the relation settings haven't been set the hook will just exit
<marcoceppi> and when relation data does change, the hook will fire again
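A sketch of that guard in a fuller hook shape (the function name and the echoed "configure" step are invented for illustration; in a live hook the value comes straight from `relation-get database`, which only exists inside a hook context):

```shell
#!/bin/bash
# Idempotency guard for a db-relation-changed hook.
# In a live hook you would use: db=$(relation-get database)
# Here the value is passed in as $1 so the logic can run outside juju.
handle_db_changed() {
    db="$1"
    if [ -z "$db" ]; then
        # Remote unit hasn't published its settings yet: return success so
        # juju doesn't mark the hook as failed; the hook fires again on the
        # next relation data change.
        return 0
    fi
    # Real work (writing config files, restarting the service) goes here.
    echo "configure app for database: $db"
}

first=$(handle_db_changed "")            # data not ready: no output, exit 0
second=$(handle_db_changed "wordpress")
echo "$second"    # prints: configure app for database: wordpress
```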
<jose> btw, let me say this as I *just* did my first juju (unsuccessful) deploy on EC2: it's AMAZING.
<marcoceppi> :D
<bodie_> :}
<jose> if this does good, then you should have another charm on the queue
<jose> marcoceppi: still around? I'm kinda stuck with a hook
<jamespage> marcoceppi, is amulet still pip installing cherrypy?
<marcoceppi> jamespage: bleh, yes
<marcoceppi> jamespage: next patch will address that
<jamespage> marcoceppi, because its not avaliable on 12.04 right?
<marcoceppi> right
<marcoceppi> well, py3 cherrypy isn't at least
<jamespage> marcoceppi, please - I really like the work mbruzek has done on rabbitmq-server for amulet tests
<jamespage> but I can't run that on our QA cloud with pip installs....
<marcoceppi> right, jamespage i have a 1.4.1 patch release planned for friday, I'll make the change then
<jamespage> marcoceppi, thanks - are you dropping the requirement for it?
<marcoceppi> jacekn: going to use a web framework that's in archive
<marcoceppi> jamespage: ^^
<jamespage> marcoceppi, +1
<jamespage> marcoceppi, how does amulet choose which series to test on btw?
<marcoceppi> jamespage: the test sets it (considering the fact that charms are currently tied to series); that might change so that series is set by the test runner though
<marcoceppi> no one has had a use case for it yet
<marcoceppi> jamespage: when you create a new amulet.Deployment you can set series as an option, defaulting to precise
<jamespage> marcoceppi, I do - cause although the branch for the charm is 'precise'
<jamespage> it actually works on 12.04->14.04
<jamespage> marcoceppi, OK - cool
<jamespage> marcoceppi, I can see how I can make that work - thanks!
<marcoceppi> jamespage: can you file a bug for amulet about multi-series testing. I'll target it to 1.5
<marcoceppi> we'll probably need a way to override via env variable, mobile atm so cant file it myself
<jam> vila: just checking in with you that things seem happier on HP for you today
<hazmat> jose, ping.. resolved?
<vila> jam: thanks for checking ! Things stabilized enough yesterday for me to be able to deploy and use the deployment.
<jam> great
<vila> jam: being able to use 1.17.5 without uploading it was key. I'm still talking with hp about the upload issue I face
<hazmat> marcoceppi, why does amulet need a web app? re cherrypy
<hazmat> marcoceppi, oh.. ic.. xmlrpc server is builtin and roughly the same
<jose> hazmat: hey, not exactly... if you have a min I have the links around
<hazmat> jono, sure
<hazmat> er. jose sure
<jose> hazmat: so, I deploy https://code.launchpad.net/~jose/charms/precise/phpbb/trunk and install, config-changed and start run good, but when a DB starts a relation with it I get an error when running db-relation-changed, says something about unbound variable on line 18
<lazyPower> if anyone has time can i get a +1 on https://code.launchpad.net/~lazypower/charms/precise/nrpe/fix-lp-1287393/+merge/209137 ? I need to land this as a dependency on some other stuff in the queue
<jose> oh, morning, lazyPower!
<lazyPower> jose: if a variable is reported as unbound, you're referencing it before it's set
<hazmat> jose. line 18 you've got an extra > at the end afaics
<lazyPower> and good morning :) o/
<jose> hmm, let's change that
<hazmat> jose,  re http://bazaar.launchpad.net/~jose/charms/precise/phpbb/trunk/view/head:/hooks/db-relation-changed#L18
<jose> hazmat: looking at it now! I'm just wondering, does relation-get host with the mysql interface provide the host address? or what's the parameter set?
<hazmat> jose host does work for mysql interface
<hazmat> the mysql server side  sets database, username, password, host atomically
<jose> then I wonder what's the variable not set?
 * jose deploys and does debug-hook
<jamespage> marcoceppi, hey - when I wrote the percona-cluster charm I decided to default all databases to UTF8
<jamespage> marcoceppi, any reason why we should not do the same in mysql?
<jamespage> marcoceppi, I was so busy testing pxc with openstack for icehouse b3 I missed https://bugs.launchpad.net/glance/+bug/1279000
<_mup_> Bug #1279000: db migrate script to set charset=utf8 for all tables <glance> <Glance:In Progress by lzy-dev> <oslo:In Progress by lzy-dev> <glance (Ubuntu):Triaged> <glance (Ubuntu Trusty):Triaged> <https://launchpad.net/bugs/1279000>
<marcoceppi> jamespage: I don't see why not
<jose> guys, I'm having a problem when SSH'ing into a machine... looks like juju is getting the private IP instead of the public IP so it never goes through
<jose> (using EC2)
<lazyPower> thats strange, i've not seen that behavior
<lazyPower> jose: how are you attempting to access the unit?
<jose> lazyPower: juju ssh phpbb/0
<jose> and juju status is the one giving me that info (I'm checking my console atm)
<lazyPower> which release of juju are you running?
<jose> .16
<lazyPower> hmm i dont have a .16 release to check on
<jose> I'm going to try adding an entry to my hosts file
<jose> that may solve it in the meanwhile
<lazyPower> jose: if you can reproduce it, make sure you file a bug on that.
<jose> will do - it's not the first time I've seen this
<rbasak> mgz: around? I think bug 1248283 is critical for MAAS+juju in Trusty, but it isn't milestoned to make Trusty currently.
<_mup_> Bug #1248283: juju userdata should not restart networking <dbus> <deploy> <landscape> <maas-provider> <midway> <ubuntu-openstack> <juju-core:Triaged> <MAAS:Invalid> <dbus (Ubuntu):Invalid> <juju-core (Ubuntu):Triaged> <juju-core (Ubuntu Saucy):Triaged> <juju-core (Ubuntu Trusty):Triaged> <https://launchpad.net/bugs/1248283>
<jamespage> fwereade, ^^
<mgz> rbasak: looking
<mgz> rbasak: I'll set back to 1.18
<mbruzek> jamespage, Thanks for the review on the amulet tests, working on addressing the issues now.
<rbasak> mgz: thanks!
<jamespage> mbruzek, did you see what I was poking at? we should be able to confirm the correct data is set on the relations
<jamespage> mbruzek, excitingly I have a full redux in flight for rabbitmq - want to land your tests first tho!
<mbruzek> Honestly did not, but running them now to see what ceph is passing on the relation
<mgz> I think our issue with that bug is we're actually not sure what to change on the juju-core side... or we have already poked it and our change didn't work, we tried moving when the bridge-utils package gets installed for... basically that reason?
<mgz> but the last comment looks hopeful
<mbruzek> jamespage, What key should I be looking for in the ceph relation?  Is it actually called "key"?
<jamespage> mbruzek, look for 'key' and 'auth'
<mbruzek> jamespage, Running the tests now to get that.
<jamespage> mbruzek, where does stdout go? I could not see the print statement anywhere
<mbruzek> It depends upon how you run them.  If you are running juju test 20_relations_test.py  you need the -v flag
<jamespage> ah - right I see
<mbruzek> I run them differently so the systems are still up after the test is done.
<mbruzek> juju test  <test name> both bootstraps and destroys environment
<mbruzek> jamespage, http://pastebin.ubuntu.com/7119833/
<jamespage> mbruzek, amqp stuff looks good - however the ceph cluster is not bootstrapped so it won't send the keys
<mbruzek> Ahh I see now, thank you for pointing that out.
<lazyPower> dosaboy: are you around?
<dosaboy> lazyPower: otp but yes go ahead
<lazyPower> dosaboy: i noticed the MP we have reviewed is getting quite a bit of back and forth. touching base with you so we can find a middle of the road resolution without the delay.
<bloodearnest> howdy. Is there a way for a charm to respond when a floating ip is added?
<bloodearnest> like, does it automatically fire any relation-changed hooks with a public-address?
<lazyPower> bloodearnest: floating ip on which provider? elasticip from AWS?
<mbruzek> bloodearnest, How would the charm know when that changes?
<lazyPower> mbruzek: good point!
<bloodearnest> mbruzek: indeed, I didn't think so (though that would be ideal from the charms pov, no?)
<bloodearnest> lazyPower: openstack
<mbruzek> If something changes on the relation, a *-relation-changed gets fired yes, but what would trigger that
<bloodearnest> lazyPower: but, interested in the general
<lazyPower> bloodearnest: since juju doesn't spend cycles monitoring the ip table of the machine, i dont think thats in the cards
<mbruzek> bloodearnest, I suppose one could write a daemon that checks the ip, and if something changes trigger a relation-changed event or something of that nature
<bloodearnest> mbruzek: right, and that would now include a public-address automatically? Or would you need to provide it explicitly?
<bloodearnest> mbruzek: ack, was thinking along those lines
<bloodearnest> or set the ip manually as a config (ugh)
<mbruzek> bloodearnest, I would have to check, but I believe that the  private address is often part of the relation-changed information.
<lazyPower> ^
<mbruzek> So if you could somehow detect that the ip changed you could fire the relation changed juju event and it would get sent to the related charms.
<marcoceppi> bloodearnest: you could write an out of band check
<marcoceppi> but there is no mechanism in juju that i know of
<bloodearnest> marcoceppi: yeah, I thought so. Just wanted to check if I was missing something.
<bloodearnest> maybe when juju grows a network abstraction, perhaps
<marcoceppi> bloodearnest: its a compelling case for a unit-changed hook
<bloodearnest> marcoceppi: WANT
<bloodearnest> that could do volume changes too. nice.
<marcoceppi> bloodearnest: you should file a bug about it on core. afaik juju has no way of tracking unit changes, including address, but given your use case it would make sense
 * marcoceppi is typing from a phone :\
<dosaboy> lazyPower: sorry lots of calls :(
<dosaboy> lazyPower: so can you remind me which MP this is?
<lazyPower> dosaboy: perfectly ok. ping me when you're off the phone?
<dosaboy> mysql per chance?
<lazyPower> indeed
<bloodearnest> marcoceppi-mobil: ack, will do
<bloodearnest> is there a hack to trigger a hook context from a cron job in 1.16 (i.e. what juju-run does in 1.17)?
<marcoceppi-mobil> bloodearnest: not without a lot of work
<bloodearnest> marcoceppi-mobil: right. 1.18 is due like RealSoonNow™, so maybe I can convince IS to use that in prod :)
<marcoceppi-mobil> bloodearnest: that's what i would do
<zchander> HI, is anyone familiar with preseed settings for MaaS? I want to partition a disk while commissioning
<bloodearnest> marcoceppi-mobil: https://bugs.launchpad.net/juju-core/+bug/1294673
<_mup_> Bug #1294673: Support hook for handling unit changes (like public IP address) <juju-core:New> <https://launchpad.net/bugs/1294673>
<Fishy_> juju ssh only goes to service, not machine?  So if I kill a service (but the machine is still up), no juju way to ssh in?
<lazyPower> Fishy_: just give it the machine #
<lazyPower> eg: to ssh to machine 1 - juju ssh 1
<bbcmicrocomputer> any idea when the OpenStack charms are going to appear for Trusty?
<marco-traveling> bbcmicrocomputer: ask jamespage
<marco-traveling> afaik they already work on trusty, just need to promulgate them as such
<bbcmicrocomputer> marco-traveling: k, cool
<Fishy_> juju ssh 1 Permission denied (publickey,password). ERROR exit status 255
<Fishy_> (whereas juju ssh wordpress is fine)
<mgz> Fishy_: which provider?
<Fishy_> local
<mgz> local you can only use the units
<marco-traveling> Fishy_: is this local provider?
<mgz> Fishy_: bug 1233497
<Fishy_> blah ;)
<Fishy_> that bug is about a password protected ssh key, mine is wide open
<Fishy_> maybe same root cause though
<mgz> yeah, should be another bug too somewhere
<bloodearnest> marco-traveling: so, my bug is duplicate of https://bugs.launchpad.net/juju-core/+bug/1215579
<_mup_> Bug #1215579: Address changes should be propagated to relations <addressability> <juju-core:Triaged> <https://launchpad.net/bugs/1215579>
<jamespage> bbcmicrocomputer, probably at or around the time trusty releases
<jamespage> bbcmicrocomputer, if you want to test the branchs are all under ~openstack-charmers
<jamespage> bbcmicrocomputer, /icehouse suffix
<bbcmicrocomputer> jamespage: k, thanks
<jamespage> bbcmicrocomputer, oh and just to be clear - the precise charms will be the trusty charms
<jamespage> so don't look for 'trusty' in the branch path
<bbcmicrocomputer> jamespage: k :)
<Fishy_> whats it mean if I see 3 charms with the same name in the charm store, but none under recommended.. and none work from a juju deploy command line?
<Fishy_> some kind of beta charm that hasn't been approved yet?
<Fishy_> i.e.   Location: cs:~canonical-ci/precise/gerrit-59
<Fishy_> deploy did finally work, ignore me
<Fishy_> need full path i guess
<rick_h_> Fishy_: so charms can be customized for various users
<rick_h_> Fishy_: so you'll often see copies of a charm that a user is working on or with
<rick_h_> Fishy_: in this case, you found the charm used by ~canonical-ci
<Fishy_> the source link didn't work for these though.. is there a normal procedure to take one of these and grab the source?
<Fishy_> so I can tweak for my own needs
<Fishy_> https://bazaar.launchpad.net/~canonical-ci/charms/precise/gerrit/trunk/files
<Fishy_> no worko
<rick_h_> Fishy_: there's a thing in charm tools called 'charm get'
<rick_h_> where you give it a cs: url and get the downloaded charm
<Fishy_> nice
<rick_h_> Fishy_: oh hmm, that's not public. I wonder why that got pulled in
<Fishy_> the entire charm isn't supposed to be public, or just the source url?
<rick_h_> Fishy_: I'm looking into it.
<rick_h_> Fishy_: you've hit some edge case and I'll need to look into how it got this way
<rick_h_> it's not typical
<Fishy_> apparently I am good at edge cases with juju
<rick_h_> stick around, we like finding those. :)
<Fishy_> if I move my entire infrastructure over to juju land, I will be here a few months...
<lazyPower> we like the sounds of that Fishy_
<Fishy_> next up is try to write my own charm for an internal app
<Fishy_> if it's not too bad, then I just need to do it 50 more times
<lazyPower> Fishy_: we also have a testing tool for validating your charms are doing what you expect them to
<lazyPower> Amulet to the rescue!
<Fishy_> What testing tool?
<lazyPower> Fishy_: https://juju.ubuntu.com/docs/tools-amulet.html
<lazyPower> if you need test examples, mbruzek wrote an excellent amulet test in the memcached charm
<lazyPower> he's also got some solid work in rabbit-mq
<Fishy_> nice
<Fishy_> is it insane to try and use "Subordinate services" as kind of mix-in behavior?
<Fishy_> like say I have a home-rolled backup service I write...  then have 10 charms, all using my backup service
<lazyPower> not at all. I wrote a subordinate for papertrail-app (a PAAS log aggregator service)
<lazyPower> because I didn't want to pay/manage my own logging infrastructure
<lazyPower> I've used it in 3 production deployments thus far, managing a total of 52 hosts.
<bloodearnest> Fishy_: that's the idea, there is a backup subordinate already around somewhere, but not in the store yet I think
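For context on the mix-in idea above: what makes a charm subordinate is its metadata.yaml, with `subordinate: true` plus at least one container-scoped relation. A minimal sketch for the hypothetical backup mix-in (charm, relation and interface names are all made up):

```yaml
# metadata.yaml for a hypothetical "fishy-backup" subordinate charm
name: fishy-backup
summary: mix-in backup behaviour for other services
description: Runs alongside a primary charm and backs up whatever it is told to.
subordinate: true
requires:
  backup-target:
    interface: backup
    scope: container   # deploys the subordinate into the primary unit's container
```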
<lazyPower> i throw those metrics in there because it makes me feel special.
<Fishy_> haha
<Fishy_> just need to see how it works mixing in to stuff people already wrote.. I see how it would be easy for my own charms to all put data in a common place so a backup subordinate could grab it..   harder to see how my charm would work on, say, a wordpress charm vs a mysql charm vs a postgresql charm
<Fishy_> without devolving into a   if postgres:  <>  else if mysql:  <>
<bloodearnest> Fishy_: the backup subordinate has a relation that the primary charm can implement to tell it what to back up
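That handshake can be sketched in Python: on its side of the backup relation, the primary charm publishes what to back up via juju's relation-set hook tool. This toy helper only builds the command line (the setting names are illustrative), so it runs anywhere:

```python
import shlex


def relation_set_cmd(settings):
    """Build the relation-set command a primary charm's
    backup-relation-joined hook would run to tell a backup
    subordinate what to back up (illustrative only)."""
    pairs = " ".join(
        "%s=%s" % (key, shlex.quote(value))
        for key, value in sorted(settings.items())
    )
    return "relation-set " + pairs


# e.g. a postgresql-ish primary charm might publish:
cmd = relation_set_cmd({"paths": "/var/lib/postgresql/backups",
                        "schedule": "daily"})
print(cmd)
```

Inside a real hook you would execute the tool rather than print the string; the point is just that the primary side advertises its data and the subordinate reads it with relation-get.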
<Fishy_> ok so then i would need to write my own version of postgresql that supports fishy-backup
<Fishy_> which seems reasonable
<Fishy_> except how to deal with upstream postgresql charm changes
<Fishy_> part of what is nice about using a precanned charm is to get the upstream fixes
<bloodearnest> Fishy_: heh, I would like to use a charm called fishy-backup :)
<bloodearnest> Fishy_: fork and pull changes periodically works pretty well, especially in your case where you would probably not alter the upstream functionality, but add new stuff
<Fishy_> ya that's true
<Fishy_> would at least be an easy merge
<lazyPower> yeah we catch most of the shenanigans during peer review
<bloodearnest> Fishy_: we do that for all charms, and deploy from a local repo. We don't deploy from the charmstore at all
<Fishy_> and juju download is the start of it?  or do you go and find the actual svn/git repo
<Fishy_> assume juju download is just a snapshot and not a repo?
<lazyPower> Fishy_: in what context do you mean? charm get? that fetches the current snapshot from wherever the parent branch of the charm is located.
<Fishy_> i dont think charm get exists
<Fishy_> in stable juju
<rick_h_> Fishy_: it's in charm-tools package. You have to install that on its own
<Fishy_> genius
<marco-traveling> bbcmicrocomputer: i still think unit-changed would be a sweet hook
<Fishy_> wow people still use bzr
<bbcmicrocomputer> marco-traveling: what happened to charm-helper-sh?
<marco-traveling> bbcmicrocomputer: it was deprecated
<marco-traveling> bbcmicrocomputer: only available on precise and in a PPA, but probably won't be going to trusty
<bbcmicrocomputer> marco-traveling: ah ok, thanks
<marco-traveling> bbcmicrocomputer: if you need the ppa, I think ~charmers owns it
 * marco-traveling goes and looks
<bbcmicrocomputer> marco-traveling: ah no worries, it was just my curiosity asking
<marco-traveling> bbcmicrocomputer: really, we're pushing charm-helpers (v2) instead, which will have something soon for charms in shell
<bbcmicrocomputer> marco-traveling: ah nice!
 * alexisb has just deployed her first service in the cloud using juju
<sarnold> woo :)
<alexisb> very good documentation did not take long to figure out, cool stuff
<sarnold> isn't that fun? :)
<sarnold> "ooh I just made a VM spin up a few hundred miles away and now it is doing my bidding!"
<alexisb> it *was* fun :)
<marco-traveling> alexisb: glad you had a good experience! Feel free to ask here if you have any questions :D
<sarnold> wallyworld: your connection is bouncing
<alexisb> marco-traveling, thank you and will do
<jose> guys, if I want my hook to run a python script which does not go to background, how should I do it?
<marco-traveling> jose: do you want the hook to block or not on the script? If the former, then just run it, if the latter write an upstart script
<jose> uh oh, I think not, I'll check how to write an upstart script then
<jose> marco-traveling: that phpbb charm I was writing, I'll need to stop as the feature we're looking for to get the DB info in place is not there yet
<jose> but the base is there, maybe in the future I can re-do it
<marco-traveling> jose: upstart is pretty simple if you need any help #upstart or ping me
<jose> cool, thanks!
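For the non-backgrounding python script above, an upstart job is just a file the hook drops into /etc/init/ and then starts. A minimal sketch (job name and script path are made up):

```
# /etc/init/myapp.conf -- hypothetical upstart job an install hook could write,
# then activate with: start myapp
description "long-running python script for the myapp charm"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/bin/python /opt/myapp/run.py
```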
<jose> marco-traveling: would a charm be approved if the user has to go and manually get the database values and set them by itself?
<ev> does the charm hook context not pull in /etc/environment?
<marco-traveling> jose: yeah, that won't fly
<marco-traveling> ev: probably/maybe not?
<ev> boooo
<webbrandon> any juju contest going on?
<marco-traveling> webbrandon: not at the moment
<webbrandon> :(
<marco-traveling> webbrandon: there's an Ubuntu Touch app development competition going on
<ev> marco-traveling, others: what's the preferred way of setting some environment data for all charms in a deployment? Say I'm at a very security conscious company and they have a http proxy for all traffic. I cannot just set the juju http-proxy config option because juju doesn't also have a no_proxy setting for putting the IPs of swift, glance, etc in.
<marco-traveling> ev: uh, that's a good question
<marco-traveling> ev: so, there's set-environment, but that's really only for the juju deployed environment
<marco-traveling> not like, environment variables
<marco-traveling> or, maybe that command does do that
<marco-traveling> ev: let me check
<marco-traveling> ev: yeah, no, it's just to modify the juju environment, not environment variables
<marco-traveling> ev: so, my first thought is something like a proxy subordinate that you can set filter-based rules on
<marco-traveling> err, not subordinate, service
<marco-traveling> so set http proxy to this deployed service, then update its configuration with filtering rules on which ips to send to the actual proxy and which shouldn't be
<marco-traveling> but that sounds messy
<ev> marco-traveling: I have the ability to bake the image
<ev> but I need to know where to stick http_proxy https_proxy, socks_proxy, no_proxy, etc so juju respects them
<ev> as when I put them in /etc/environment in the baked image, it didn't find its way into the environment for $charm/hooks/hooks.py
<ev> oh
<marco-traveling> ev: I'd ask around in #juju-dev
<ev> okay, thank you
<marco-traveling> There might be a place that you can place these environment variables within juju that the hooks will pick it up
<Fishy_> install is run by user root, but start is run by user ubuntu?
<Fishy_> is that the convention
<Fishy_> or anything except install is run by ubuntu
<peterklipfel> hi all, I made an oops, and now I have a mess. I accidentally spun up too many lxcs, my computer exploded, so I killed all the juju processes and deleted the lxcs. When I run juju status, I see a bunch of non-existent containers. The processes are still around, but they won't die (I assume there's a daemon respawning them). How do I reset everything?
<Fishy_> sudo juju destroy-environment
<peterklipfel> Fishy_: that doesn't work. I get "ERROR failed to rename container directory"
<Fishy_> next time don't delete stuff by hand ;) messed something up
<Fishy_> have not seen that error though
<Fishy_> (yet)
<Fishy_> for juju could just blow away ~/.juju directory
<lazyPower> peterklipfel: use --force
<Fishy_> but lxc has stuff elsewhere
<lazyPower> peterklipfel: juju destroy-environment local --force
<peterklipfel> ok. Actually, I was almost there. I just had to go in and manually recreate the directories (empty) and then run destroy-environment until it succeeded
<peterklipfel> the --force flag made juju complain. It doesn't recognize it
<lazyPower> peterklipfel: are you on the 1.16 series of juju?
<peterklipfel> lazyPower: yes
<lazyPower> ah ok, thats a recent convention
<lazyPower> hang on there are instructions for cleaning up a broken local install
<lazyPower> 1 moment while i fetch them
<lazyPower> peterklipfel: http://askubuntu.com/questions/403618/how-do-i-clean-up-a-machine-after-using-the-local-provider
<peterklipfel> lazyPower: thanks!
<lazyPower> peterklipfel: np, let me know if you run into any issues. I've gotten my hands dirty quite a few times with LXC  misbehaving
<Fishy_> how do I tell the juju that I have something running on port 80
<Fishy_> so it shows up on status
<Fishy_> and gets exposed
<Fishy_> provides:   website: http
<lazyPower> Fishy_: open-port exposed
<lazyPower> er
<lazyPower> wow
<Fishy_> ?
<lazyPower> Fishy_: open-port 80 in your hooks.
<Fishy_> start hook?
<Fishy_> or install
<lazyPower> that works.
<Fishy_> i guess install
<lazyPower> I typically put them in config-changed if i have an ssl option so i can switch the ports on the fly
<lazyPower> or if there is a port setting, i can react on it. but if its always going to be port 80, put it wherever it makes sense.
<marco-traveling> Fishy_: you typically want to open-port when the service is ready to be used
<Fishy_> open-port : command not found
<Fishy_> I am in bash land
<lazyPower> Fishy_: it has to be executed within hook-running context
<marco-traveling> Fishy_: so if after installation its ready, install hook. If you have to start the service, then start hook, if it requires a database, then the db-relation-changed hook
<Fishy_> o okay
<Fishy_> not ready till start happens
<marco-traveling> Fishy_: then open-port 80 in the start hook should suffice
<Fishy_> config changed is good idea when I tie it to the config
<Fishy_> but thats step 11
<marco-traveling> Fishy_: yeah, like lazyPower said, if you can configure the port then it should /definitely/ be in the config-changed hook instead
<marco-traveling> but as you mentioned, you can iterate on that later
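Tying the points above together: open-port only exists in a hook-running context (hence the "command not found" from a plain bash session). A hook written in python can guard for that; a sketch, with the service-start step elided:

```python
import shutil
import subprocess


def open_port(port=80):
    """Call juju's open-port hook tool if it's available.

    The tool is only on PATH while juju is executing a hook, which is
    why running it from a normal shell gives "command not found".
    """
    if shutil.which("open-port") is None:
        return False  # not in a hook context (e.g. a plain shell)
    subprocess.check_call(["open-port", str(port)])
    return True
```

In a real start hook you would start the service first and only then open the port, per the advice above that the port should open once the service is ready.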
<Fishy_> is the juju master node a single point of failure in this whole operation?
<marco-traveling> Fishy_: it is at the moment, we have HA landing for the bootstrap node in the next week or two
<marco-traveling> Fishy_: and it's a point of failure in terms of controlling the environment, if the bootstrap node goes down the other services will continue to run
<marco-traveling> you just won't be able to orchestrate anymore (deploy, relate, expose, destroy, etc)
<Fishy_> and if i brought it back up, it wouldnt re-discover what was done
<Fishy_> so potentially lost track of what my machines are doing
<Fishy_> cool open-port works
<marco-traveling> Fishy_: well, if you rebooted the node
<marco-traveling> Fishy_: it would pick up where it left off
<marco-traveling> like, if there was a networking outage or something similar
<marco-traveling> but if it was deleted, then there's really no way to recover from that at the moment
<marco-traveling> with HA you'll be able to deploy and scale your bootstrap node, so if you wanted HA you could have three bootstrap nodes, if one goes down the other two will continue serving
<Fishy_> ya that would be good long term
<Fishy_> for sure
<Fishy_> master - slave ?
<Fishy_> or can i run commands from any of them
<marco-traveling> Fishy_: well, you don't run commands from the bootstrap nodes
<marco-traveling> Fishy_: the bootstrap nodes expose an API server that the CLI and GUI, etc talk to
<Fishy_> ya but can you talk to all 3 of the HA
<Fishy_> or just 1
<marco-traveling> Fishy_: all three, but juju will just select one
<bodie_> do I really need to use the gophers/go PPA to get lbox or can I just go get launchpad.net/lbox?
<marco-traveling> bodie_: the latter is better
#juju 2014-03-20
<themonk> I am facing a problem: my conf.tmpl file has some variables that come from config.yaml, and the last 2 come from a relation. The problem is that after the relation is joined, if I change the configuration then those 2 relation-dependent variables go missing. how do i solve this? need help
<_sEBAs_> some soul, can tell me why, through juju debug-hooks, commands like `config-get' are not working?
<_sEBAs_> please :P
<_sEBAs_> Im trying to debug a hook, but its impossible if I can't even run the charm commands like "config-get"
<yolanda> hi jamespage, stub
<jamespage> hey yolanda
<stub> So rather than extending the PostgreSQL db relation to be able to specify multiple databases and users, and return multiple passwords to the clients, I was thinking just using multiple relations would be better
<stub> Each relation stays simple, one database and user. If your charm needs two, it opens two relations and gets exactly the same information. Less work probably as it doesn't need to decode the more complex data structures.
<stub> This also means other charms, like the pgbouncer connection pooler, don't need to be updated to support extensions
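On the client side, stub's approach would just mean declaring one relation per database needed; a metadata.yaml sketch (relation names are made up, and it assumes the PostgreSQL charm's interface is named pgsql):

```yaml
# client charm metadata.yaml sketch: two independent relations,
# each returning its own database, user and password
requires:
  db:
    interface: pgsql
  reporting-db:
    interface: pgsql
```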
<yolanda> jamespage, i think it could make sense and looks cleaner actually
<jamespage> ok - this sounds reasonable
<yolanda> we can apply that to postgresql alternative at the moment, not touch mysql
<stub> If it is going to cause scheduling issues, or it just isn't going to work, we can proceed with yolanda's work mimicking mysql's interface. But it will mean compatibility issues with pgbouncer, and we might want to flag the interface extensions as experimental or temporary.
<yolanda> stub, i think that at the moment we are fine implementing new approach, postgresql is a new feature so we can work on the best approach
<stub> Ok. Let me know how you go or if you want any help. I'd rather be an enabler than a blocker :)
<jamespage> yolanda, stub: sorry - ended up with two kids for a minute then!
<jamespage> yolanda, lets take this approach - its different to mysql BUT this should only apply in the nova-cloud-controller charm
<jamespage> all other charms really only need one db access
<yolanda> yes, it's a special case
<jamespage> yolanda, nova-compute and quantum-gateway might think they do but they don't
<yolanda> what do you mean?
<jamespage> yolanda, those two charms I think request multiple db's
<jamespage> yolanda, I noticed this during ssl-everywhere - they don't need them
<jamespage> nova-compute does not need the db connection at all in later releases
<jamespage> quantum-gateway just needs nova
<jamespage> yolanda, I fixed both of these in the ssl-everywhere branches - we have a lot to land still
<yolanda> jamespage, so is it better to use ssl-everywhere as a base?
<jamespage> yolanda, not yet
<yolanda> well, dealing with nova-c-c at the moment
<jamespage> yolanda, get that up on branches and we can then review for the other ones
<yolanda> jamespage, currently working on personal one: lp:~yolanda.robla/charms/precise/nova-cloud-controller/postgresql-alternative
<jamespage> yolanda, +1 that's fine
<yolanda> doesn't look bad but i have to deal with the neutron postgres settings, still pointing to sqlite
<jamespage> yolanda, I proposed https://code.launchpad.net/~openstack-charmers/charm-helpers/active-active/+merge/211285
<jamespage> I think that's good now
<yolanda> cool! it has been hard
<yolanda> did you do any more updates?
<yolanda> jamespage, also, about automatically detecting rabbit failures and switch, it works much better on icehouse. In havana i only see nova-compute to work fine
<jamespage> yolanda, yeah - I think the kombu in 14.04/icehouse is better at dealing with this
<jamespage> yolanda, infact I'm going to merge the active-active branches into the icehouse branches to test
<yolanda> ok
<yolanda> the way i tested it is using icehouse for rabbit, and havana for the others
<timrc> How do I debug: 2014-03-20 15:03:42 ERROR juju.cmd supercommand.go:293 environment has no bootstrap configuration data -- I'm using a fresh config ala juju generate-config and the juju switch'ing to 'local'
<Tug> Hi, I found out about juju very recently, looks promising! I have a few questions of my own (so from a juju newbie)
<Tug> for instance I'm wondering about the capabilities of the local provider
<Tug> It probably won't virtualize a load balancer like the one cloud providers usually have (ELB on AWS) right ?
<timrc> strace to the rescue
<Fishy_> So say I want to use MAAS to deploy a group of servers, and then put 3 or 4 juju LXC containers on each... is this sane, or is mixing MAAS and LXC not good
<Fishy_> or do I do an open stack thing
<Fishy_> and bake that on top of MAAS servers
<Fishy_> basically i think I can turn all of my apps into charms, which is good. trying to figure out how to design the rest
<Fishy_> 10% will need to run directly on the hardware.  90% I want to run in LXC, on servers that get set up via MAAS or some kind of magic
<bloodearnest> Tug: correct, in fact no environment type can "install" SaaS like ELB. Charms can have support for it explicitly, but I don't think that would work in local provider
<bloodearnest> Tug: but the future roadmap has the idea of "virtual charms", which are in your environment and act as a gateway/proxy to SaaS, so you can relate to it, configure it, etc
<Tug> bloodearnest, interesting !
<Tug> bloodearnest, for now a classic charms won't be able to do any network configuration if I use a local provider or vagrant right ?
<bloodearnest> Tug: can you clarify what you mean by network configuration? Some things are likely possible, others not
<Tug> ok let's take an example, the mongodb cluster charm
<Tug> you have to somehow configure each shard and mongos etc
<Tug> so each machine has to be aware of the global configuration I guess
<bloodearnest> Tug: so, mongodb-cluster is a "bundle" that deploys 5 services using the same charm (mongodb), but configuring and relating them differently
<bloodearnest> Tug: the relations add the ip/port config between machines, and each services config controls overall settings for the service
<Tug> bloodearnest, ok thx for the explanation
<bloodearnest> Tug: so in terms of ip address, there's nothing you need to do, except maybe expose some ports externally
<Tug> ok so juju would be able to configure this for each vagrant vm automatically ?
<Tug> I guess it works like the MAAS
<bloodearnest> Tug: yes, that would work on vagrant/lxc/whatever
<Tug> cool, I have to try it then!
<bloodearnest> Tug: the relevant provider implementation knows how to setup local networking
<Tug> Thx, bloodearnest
<bloodearnest> Tug: your welcome
<bloodearnest> you're*
<bloodearnest> Fishy_: sounds like you want the manual provider in 1.17
<bloodearnest> Fishy_: and deploying LXC containers on MAAS controlled machines should work fine
<Fishy_> but i only want 1 environment right?
<bloodearnest> Fishy_: yes - type: manual
<bloodearnest> Fishy_: you add machines/lxcs to it manually
<Fishy_> okay
<bloodearnest> and then do juju deploy --to=<machine id>
<Fishy_> " This is useful if you have groups of machines that you want to use for Juju but don't want to add the complexity of a new OpenStack or MAAS setup"
<Fishy_> but im ok with an openstack or MAAS setup
<Fishy_> I just need to sometimes deploy to hardware, sometimes to LXC
<Fishy_> and I want the hardware to be set up by something, like MAAS
<bloodearnest> Fishy_: currently, non-manual single environment can only deploy to a single type of vm abstraction
<bloodearnest> e.g openstack, ec2, azure, local container (on same machine, lxc or kvm)
<bloodearnest> Fishy_: to mix and match within one env, you will need the manual provider I think
<Fishy_> ok so if I go manual, I will need to do all the LXC stuff myself?  what it does in local for me already
<bloodearnest> yes, I'm afraid
<bloodearnest> Fishy_: local provider will only deploy on the local machine AFAIK
<Fishy_> maybe better to just run 2 environments?
<Fishy_> and call switch a lot?
<timrc> This documentation: https://juju.ubuntu.com/docs/config-LXC.html seems at odds with today's reality
<timrc> for example sudo juju bootstrap for local is met with an error claiming you should not bootstrap as root
<bloodearnest> Fishy_: it's generally better to have 1 env per "service" I think
<bloodearnest> Fishy_: e.g. we have maybe 20+ production environments
<Fishy_> well we have 20 services... 1 needs bare metal, 19 can be on LXC
<bloodearnest> varying from a half dozen machines to 40 or so
<Fishy_> its different things
<bloodearnest> Fishy_: yes, ours are all different services. The do interact, but not via juju (yet)
<Fishy_> i would just have prod and qa... and then the two types mentioned
<Fishy_> hum
<bloodearnest> timrc: yeah, that's for 1.16, sounds like  you're on 1.17
<Fishy_> i dont need interaction via juju
<Fishy_> i just need the machines set up
<bloodearnest> Fishy_: in that case I suggest one env per service
<bloodearnest> Fishy_: we have 2 per service - staging and prod
<timrc> bloodearnest, Correct.. 1.17.4-trusty-amd64
<Fishy_> okay so a MAAS service can deploy 20 nodes for me...  5 of those bare metal to install my perf apps, and 15 of them future VM hosts..  how do I do the VM host step?  Make a charm that sets up an openstack or something?
<timrc> bloodearnest, How do environments.yaml and environments/*.jenv's jive? The local.jenv file was completely empty and I had to add things by hand... bootstrapping is still crashing and burning atm, but I feel like I'm missing some fundamental step here
<bloodearnest> timrc: don't mess with environments/*.jenv - they are autogenerated and used for introspection by tools
<Fishy_> or I guess I could just make everything MAAS.. but then deploy 5 apps to the same MAAS server.. but that seems against the juju style
<bloodearnest> Fishy_: there is a well tested openstack set of charms that we use on MAAS to deploy openstack
<Fishy_> oh okay so I could make my MAAS VM host guys use openstack
<bloodearnest> timrc: for 1.17, just do juju bootstrap, it will prompt for sudo when needed
<Fishy_> then my openstack environment hits that ?
<bloodearnest> Fishy_: sounds right
<Fishy_> will that use LXC?
<timrc> bloodearnest, I didn't want to mess with it but when I went to bootstrap it said there was no bootstrap-config data
<Fishy_> under the covers
<timrc> bloodearnest, and the local.jenv file was empty :(
<bloodearnest> Fishy_: I believe openstack can use lxc as a machine type, yes
<bloodearnest> Fishy_: but it defaults to kvm
<bloodearnest> Fishy_: there was a thread about this recently on the juju mailing list
<Fishy_> or is there an alternative to openstack
<Fishy_> that will use lxc
<bloodearnest> timrc: try juju destroy-environment local --force && juju bootstrap
<bloodearnest> Fishy_: not that I know of
<Fishy_> openstack bundle is 19 charms
<Fishy_> that seems pretty intense
<bloodearnest> Fishy_: openstack is notoriously difficult to deploy. Juju is one of the easiest ways atm, AIUI
<bloodearnest> Fishy_: the manual provisioning is not so difficult to use
<Fishy_> ya okay that is sounding better
<Fishy_> i just like how simple local is
<Fishy_> i want to apply that to many machines
<bloodearnest> Fishy_: say you spin up 30 lxc containers across your cluster
<Fishy_> i always know what box what app will run on
<bloodearnest> create a manual juju env
<Fishy_> so when app A needs to deploy, I know I want it on VM host 123
<bloodearnest> Fishy_: right, so --to is your friend
<Fishy_> ya
<timrc> bloodearnest, I got some very informative: ERROR exit status 1's
<bloodearnest> Fishy_: it's the only way to get fine grained control on placement
<bloodearnest> timrc: lols, I get those too sometimes. A bootstrap should still work even when the env is not destroyed 100%, it will just pave over
<timrc> I wish
<Fishy_> it does blow up hard if you are in the .juju directory
<Fishy_> when you destroy
<Fishy_> and try to re-bootstrap
<timrc> juju bootstrap
<timrc> ERROR environment is already bootstrapped
<timrc> so it looks like it can't actually destroy the environment
<bloodearnest> Fishy_: caveat - the manual provider is considered beta, AIUI
<bloodearnest> timrc: so, I've had this on occasion
<timrc> this worked reasonably well a year or so ago
<timrc> I guess a lot has changed
<bloodearnest> timrc: rm -rf ~/.juju/local/*
<Fishy_> kind of thinking MAAS now, and just shove 5 apps to the same host and forget about vms
<bloodearnest> has worked for me in getting juju to recognise that the env is really actually dead
<Fishy_> going in circles..
<timrc> bloodearnest, Oh, I've done that... that will get the bootstrap to start but dies after mongodb db... I'll check the log.. seems to be a bit of a mess
<bloodearnest> timrc: nasty.
<Fishy_> mongoDB is a mess, I am in the process of migrating away ;)
<bloodearnest> have you done an update recently? 1.17.5 is out
<timrc> bloodearnest, well I updated today and got 1.17.4
<bloodearnest> timrc: right, 1,17,5 must not be in trusty yet
<bloodearnest> timrc: so I have encountered this issue of mongo just dying before
<bloodearnest> but don't know the fix
<Fishy_> can look in mongo logs?
<timrc> apt-get --purge mongodb-server :)?
<timrc> er purge*
<timrc> well canonistack is back, I think, so I'll switch back to that... really wish I could get lxc/local provider working
<Fishy_> installing juju actually crashed on me on day 1
<Fishy_> becuase I had mongodb already installed
<Fishy_> it totally crapped the bed
<Fishy_> but I blame mongo
<bloodearnest> Fishy_: so someone just posted to the juju list describing something similar to your problem - was that you? :)
<Fishy_> yes
<Fishy_> no offense but i wanted hivemind thought
<Fishy_> see what else we are missing
<bloodearnest> Fishy_: oh no offense taken, there are many people more qualified than me to answer your questions :)
<rick_h_> lies bloodearnest or bust!
<hazmat> geekmush, ping
<hazmat> geekmush, wondering if we can do an interactive session.. i think theres's something getting lost in the translation
<geekmush> ?
<hazmat> geekmush, re do plugin
<geekmush> I havent' had time to swing back around to juju yet, sorry.
<sfeole> timrc-afk: 1.17.5 should fix your local provider issues
<sfeole> timrc-afk: need to add the juju/devel ppa
<hazmat> geekmush, sorry, i was thinking about a different gh ticket... and conversation
<geekmush> hazmat:  heh, no problem ... I was kinda wondering ... then, again, I *could* have been sleep juju'ing ..  :)
<Fishy_> im going bold and switching to dev
<Fishy_> seems more interesting
<marco-traveling> Fishy_: it's pretty stable, and great to use if you are just trying it out
<marco-traveling> Fishy_: but if you are doing production deployments use stable because that has an upgrade path
<Fishy_> ya i need to figure out if juju is the right path still
<Fishy_> before I start prod
<marco-traveling> Fishy_: then use devel
<marco-traveling> it's got all the features coming in the next stable
<Fishy_> So tell me about upgrade..  is the idea you update the core binary in your charm, but without blowing away and recreating the entire VM?
<timrc> I (seemingly) randomly started getting: 2014-03-20 18:03:19 WARNING juju.environs open.go:258 failed to write bootstrap-verify file: cannot make Swift control container: failed to create container: 595054e0e7b048cb87887a0b3d7bc663 when I attempt to bootstrap to an openstack provider... has anyone seen this? I currently have 1.16.5-trusty-amd64 but same thing with 1.17.4 -- I juju init'ed a fresh JUJU_HOME... the folks that manage swift don't seem to think its a problem on their end... at least other users are not reporting the problem
<onrea> What is the public address of Wordpress when installed by juju?
<onrea> ==> http://askubuntu.com/q/436975/152405
<onrea> lazyPower: ^
<lazyPower> onrea: your AU question references discourse, and you're asking about wordpress
<lazyPower> i'm confused on which you want answered
<onrea> It's because people are not familiar with Discourse
<onrea> However, I'm trying with wordpress, too. same result
<onrea> They will pass the question when see 'Discourse' in the title!
<lazyPower> hatch: something went awry with this installation of discourse that onrea posted about however - his port 80 was never opened... i don't see anything about a hook error
<lazyPower> i suspect what happened is it ran into an error, and it was juju resolved without any actual intervention being done.
<hatch> lazyPower oh I didn't even notice that - I just assumed that he was running local so it couldn't open it
<lazyPower> it will open it regardless of whether it's done, you'll still see it in the status output
<hatch> lazyPower ahh I see your comment, cool I'll take a look in a second
<hatch> upvoted
<hatch> thanks for expanding
<lazyPower> thanks for the upvote :-)
<dpb1> hi -- in ec2, I'm bootstrapping with some storage created in the same region.  It's attaching to my bootstrap node automatically.  Is this expected?
<marco-traveling> the discourse charm is broken
<marco-traveling> there's is a reason why it's not in the store yet
<lazyPower> marco-traveling: ah that would explain it
<marco-traveling> it's not compatible with latest upstream
<lazyPower> it has a rev installation target though right? and defaults to that tag?
<lazyPower> it's been about 4 months since i dove into it last
<marco-traveling> yes. so you could use it with an older working version
<lazyPower> ok i'll amend the comment to reflect those details after dinner
<lazyPower> ty marco
<lazyPower> dpb1: well, that is odd.
<lazyPower> dpb1: i can't say that I have bootstrapped with storage in the same region though - you mean an EBS volume provisioned, but "free" in the listing right?
<dpb1> lazyPower: ya, that's right
<dpb1> lazyPower: has to be in the same AZ even
<lazyPower> ok, let me see if i can reproduce
<lazyPower> what version of juju are you on?
<_2_Heben2> Mario
<_2_Heben2> Mario
<marco-traveling> Luigi
<marco-traveling> Luigi!
<lazyPower> dpb1: well, it didnt auto map for me
<dpb1> :(
<dpb1> I'll see if I can get it more reproducable...
<lazyPower> dpb1: stable or unstable series of juju?
<dpb1> 1.16.x
<lazyPower> ok, i'm running the unstable series too, so that's another factor to consider.
<dpb1> lazyPower: thx, I appreciate you trying.  I'll reply to that email thread if I find anything more concrete.
<lazyPower> dpb1: np. happy to help
<davecheney> marco-traveling: o/
<davecheney> what would it take for you to promulgate the mysql charm for trusty
<davecheney> ?
<marco-traveling> davecheney: tests
<marco-traveling> davecheney: I've almost got them all written, should be landing next week
<marco-traveling> then we just need to run the tests against a trusty bootstrap
<marco-traveling> and if it passes I can promulgate
<marco-traveling> davecheney: but as for physical limitations, there are none
<marco-traveling> we /could/ do it right now
<davecheney> marco-traveling: /would/ you do it for me ?
<davecheney> I have environments which have no valid precise images
<davecheney> so I need trusty charms
<marco-traveling> davecheney: ehhhhhhhh I would love to, really I would, but it kind of flies in the face of the whole "no trusty charms without tests"
<marco-traveling> davecheney: what about another charm, like memcached, rabbitmq-server, mediawiki, etc?
<marco-traveling> those have tests that we could spin up against a trusty series
<davecheney> marco-traveling: i'll take anything you have
<davecheney> right now i have the ubuntu charm
<davecheney> and that a convincing demo doth not make
<marco-traveling> davecheney: sure, I'm on an airplane right now, but I can have about 4 or 5 charms promulgated to trusty after they pass tests on trusty tomorrow morning (which kind of sucks for you) so maybe tonight if I have the energy for it
<sarnold> heh, 'traveling' is quite specific then :)
<davecheney> marco-traveling: oh right
<davecheney> you literally are traveling
<davecheney> whatever you can promulgate would be awesome
<davecheney> currently of course deploying  from local:
<marco-traveling> davecheney: while we're here, can you verify if float is a valid configuration type?
<davecheney> marco-traveling: good question
<davecheney> my initial answer is no
<davecheney> only because I've never seen it used
<marco-traveling> it's in the docs, but I don't trust that
<marco-traveling> yeah, mine as well
<marco-traveling> I thought it was only int, string, bool
 * marco-traveling would love enum
<davecheney> marco-traveling: i'd bet a small developing economy that any instances of floats are actually "0.1"
<davecheney> lemme check
<davecheney> marco-traveling: looking at the code
<davecheney> ints, bools, strings
<davecheney> and maps and lists composed of those primitives
<marco-traveling> davecheney: cool, I'll update the docs
<davecheney> marco-traveling: let me test this
<davecheney> but yes, no floats
<davecheney> they should be expressed as strings
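The point davecheney is making can be sketched as a hypothetical `config.yaml` (option names here are made up for illustration; the type names follow what he reads out of the code):

```shell
# Write a sample charm config.yaml using only the types discussed:
# int, string, and boolean. A fractional value like "0.1" is quoted
# as a string, since there is no float type.
cat > config.yaml <<'EOF'
options:
  workers:
    type: int
    default: 4
  cache-ratio:
    type: string        # no float type: express "0.1" as a string
    default: "0.1"
  verbose:
    type: boolean
    default: false
EOF
grep -c 'type:' config.yaml    # sanity check: three typed options
```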
<marco-traveling> davecheney: wait, what? we can do lists?
<davecheney> the net effect will be almost the same
<marco-traveling> davecheney: sure, I get that
<marco-traveling> but what is this maps and lists you speak of
<davecheney> marco-traveling: it's the same logic that ingests environments.yaml
<jose> hey marco-traveling, do you know if there's a way for a charm related to another one to say 'hey, I want you to generate another user named abc'?
<davecheney> they aren't available inside metadata.yaml
<davecheney> sorry
<davecheney> i mean you can write them
<davecheney> but it won't validate
<davecheney> and won't make it through config-get
<marco-traveling> davecheney: oh, right, i see
<marco-traveling> davecheney: I'll open a feature request for an ENUM config option, doubt it'll go anywhere, but having strict set of values would be nice imo
<marco-traveling> jose: you could do that in the interface, if you create one
<jose> do you have an example? trying to do that for postfix and the reddit charm I will *try* to write
<marco-traveling> jose: so, if you're creating an interface, just have a key that accepts a comma-separated list of users
<marco-traveling> then the charm can chop the list up and figure out which users have been created and which haven't, etc
<jose> is that possible in Bash?
<marco-traveling> jose: yes
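Marco's suggestion can be sketched in shell; the `users` value and the user names are hypothetical stand-ins for what `relation-get` would return inside a real hook:

```shell
# Split a comma-separated "users" value and act only on users that
# don't already exist. In a real hook this would come from:
#   users=$(relation-get users)
users="abc,web,metrics"
for u in $(echo "$users" | tr ',' ' '); do
  if ! id "$u" >/dev/null 2>&1; then
    echo "creating user: $u"     # e.g. adduser --disabled-password "$u"
  fi
done
```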
<jose> I'm checking the relation docs atm, I just found out about relation-set
#juju 2014-03-21
<Tug> so I tried to deploy the "mongodb-cluster" in a vagrant vm, how can I check that it's up and running ?
<Tug> I cannot ssh to the mongos unit
<Tug> vagrant@vagrant-ubuntu-precise-64:~$ juju ssh mongos/0
<Tug> ERROR unit "mongos/0" has no public address
<sarnold> have you exposed the unit?
<sarnold> s/unit/service/
<Tug> yes
<sarnold> can you connect a mongo client to the address specified?
<Tug> using the web ui first
<Tug> then with juju expose mongos
<Tug> sarnold, which address are you talking about ?
<Tug> (my machine is really unstable atm, but I can understand as the vm runs 13 units)
<davecheney> sarnold: this is different
<davecheney> open-port / juju expose is for network connections of the service
<davecheney> juju ssh, well, ssh's to that unit
<davecheney> but it looks like there is no public address available
<davecheney> so it cannot route to that machine
<davecheney> Tug: you could try, juju ssh $THE NUMBER OF THE MACHINE
<sarnold> which could be if using the local provider (lxc), right?
<davecheney> look in juju status
<sarnold> (at least I think I heard 'juju ssh' doesn't work with lxc..)
<davecheney> sarnold: i'm not sure what 'deploy in a vagrant vm' means in the context of juju
<davecheney> sarnold: nah, it works
<sarnold> davecheney: oh hooray :) what am I thinking of then? :)
<davecheney> sarnold: if there is a problem creating the lxc container
<davecheney> then that error is common
<davecheney> because we need the agent to start up inside the container to report back the ip addresses it sees
<sarnold> ahh
<Tug> please wait, system is reeeeaaally sloww
<Tug> $ juju ssh 5
<Tug> Warning: Permanently added '10.0.2.15' (ECDSA) to the list of known hosts.
<Tug> Permission denied (publickey,password).
<davecheney> ok, this is a bit different
<marco-traveling> davecheney: sarnold  juju ssh machine number on local does not work
<Tug> It's my first time using juju so I don't really understand what you are saying ^^
<davecheney> marco-traveling: it does
<marco-traveling> you have to use unit
<Tug> the agent ?
<marco-traveling> davecheney: that output suggests otherwise
<davecheney> marco-traveling: it really really does
<marco-traveling> davecheney: when was that fixed?
<Tug> yes it's on the vagrant documentation page
<davecheney> ubuntu@winton-02:~/charms/trusty$ juju ssh 1 -- bash -c 'whoami;hostname'
<davecheney> ubuntu
<davecheney> ubuntu-local-machine-1
<davecheney> Connection to 10.0.3.42 closed.
<Tug> it's supposed to be mongos/0
<davecheney> marco-traveling: not sure
<marco-traveling> Tug: what does juju status say
<davecheney> maybe not fixed in 1.16
<davecheney> it's been a long time since I used 1.16
<davecheney> absolutely fixed in 1.17/18
<Tug> sorry what ? you need my juju version ?
<Tug> 1.16.6-saucy-amd64
<marco-traveling> okay let's take a step back
<sarnold> Tug: can you pastebin your juju status output? (the pastebinit program can be quite helpful for using pastebins)
<Tug> yep yes wait a bit browser is laggy
<Tug> http://pastebin.com/5MfLUWs4
<Tug> so, lot of "pending", is that good ?
<marco-traveling> Tug: it means what you'd expect, things are still being setup
<Tug> ok, thx marco-traveling !
<Tug> It's been an hour now though
<marco-traveling> Tug: you may have exceeded the limits of the vagrant box
<marco-traveling> that's a lot of services
<Tug> yeah my machine's
<Tug> *or my machine's
<sarnold> oww :)
<marco-traveling> Tug: you might want to try with less units next time
<Tug> ok I'll shut it down now and try in a real cloud
<marco-traveling> or that
<Tug> thanks for your help guys
<Tug> (or girls)
<sarnold> have fun Tug :)
<Tug> I will sarnold, I will
<marcoceppi> well I'll be damned, davecheney 1.17.5 fixes juju ssh <machine>
<marcoceppi> I'll make sure the next version of the docs have that caveat removed \o/
<marcoceppi> davecheney: I just double checked all the release notes, I didn't see it mentioned :\
<davecheney> marcoceppi: emoji crying
<bloodearnest> \o/
<bloodearnest> lxc-clone: true baby!
<bloodearnest> oh boy, lxc cloning is a game changer. 10 machines in 10s
<gsamfira> Hey there folks. Quick question: Is there any way to force remove a unit in dying state?
<gsamfira> without clobbering the machine preferably :)
<gsamfira> never mind. Apparently if a machine agent gets interrupted while killing a unit and doesn't manage to report back, it can't recover. It only checks if life == params.Dead
<gsamfira> and not ==params.Dying
<Tug> I try to understand juju's best practices
<Tug> so if I get it right, the documentation says I should not write a charm to deploy my application
<Tug> It's unclear to me how I can configure the whole machine then
<Tug> for instance, at the moment I have a bash script which copies a nginx.conf which points to a specific path to serve static files and to specific ports where my node.js app is running
<Tug> plus nginx is doing extra work like handling ssl, etc
<Tug> what should I get started on to port this to juju ?
<marcoceppi> Tug: what do you mean? charms should do whatever they need to in order to set up your service
<Tug> marcoceppi, ok, then I don't understand where I can set the configuration for the nginx charm
<marcoceppi> don't use the nginx charm, just have your charm install nginx and configure it
<Tug> marcoceppi, ok so I do need to write a charm
<marcoceppi> yes, it sounds like it. the nginx charm is more of a microcache + loadbalancer
<Tug> I don't know, is it common to write your own charm ?
<marcoceppi> very
<Tug> or is it supposed to be for service only
<Tug> ok
<marcoceppi> apt-get is for packages, charms are for deployments
<Tug> so to get started, I can just write a charm which executes my bootstrap script
<marcoceppi> if you want to think of it like that
<Tug> I see
<marcoceppi> what bootstrap script?
<Tug> the one I'm using atm to configure a new machine, it's just a bash script
<marcoceppi> oh, then yes, basically
<marcoceppi> bleh, sorry, my swipe keyboard is just not hacking it this morning.
<zchander> ping marcoceppi
<Tug> alright, thx marcoceppi
<marcoceppi> Tug: if you have a script, you basically have like 85-90% of a charm. You might need to tweak it a bit to work with configurations so you can pass configuration variables to the charm, handle relations, etc
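marcoceppi's "85-90% of a charm" point can be sketched as a minimal install hook wrapping an existing script. Everything here is hypothetical (`app_port`, `bootstrap.sh`); `config-get` only exists inside a real hook context, so a fallback default keeps the sketch runnable:

```shell
#!/bin/bash
# Minimal install-hook sketch: read a config value, then hand off to
# the charm author's existing bootstrap script unchanged.
set -eu
app_port=$(config-get app_port 2>/dev/null || echo 8080)  # fallback outside a hook
echo "configuring service on port $app_port"
# ./bootstrap.sh --port "$app_port"   # the pre-existing script, unchanged
```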
<marcoceppi> zchander: o/
<zchander> Good afternoon (it is here, at least ;) )
<overm1nd> for me it would be useful to have a general charm for example to add a php virtual host to apache service
<overm1nd> i'm just new to juju
<overm1nd> I will try to write my own and experiment
<zchander> May I bother you (again) with a 'noob' question? I am trying to deploy Ceph on my MaaS, with Juju. So far, I have managed to get Ceph running, with storage, but how can I make this available through e.g. NFS (or something similar)
<marcoceppi> overm1nd: that's not a bad idea, it opens some issues, like some php applications require specific packages, etc, but having a generic php container charm (like we do for tomcat) would be cool
<marcoceppi> overm1nd: we have similar examples for rails, node.js, and tomcat* I'd be happy to help answer questions if you want to try to tackle that though
<overm1nd> thx marcoceppi, I think it's a very common deployment
<marcoceppi> zchander: so, ceph charm exposes a ceph-client interface that you can communicate with from your charm
<marcoceppi> zchander: there's an example of how to communicate with ceph in your charm in this example (non-working) charm: http://manage.jujucharms.com/~james-page/quantal/ceph-client
<overm1nd> in the docs I cannot find the answer to a simple question
<marcoceppi> overm1nd: PHP is a /very/ popular deployment strategy in this day and age of the web. having something like a charm which installed php5-fpm, nginx, etc and configured it with the ability to co-locate multiple php apps would be really nice
<marcoceppi> overm1nd: which question is that?
<overm1nd> how can we deal with migrating a db from one machine to another?
<marcoceppi> overm1nd: as in, you deploy a database charm, and want to migrate the database on it to another deployment of that database charm?
<overm1nd> exactly
<marcoceppi> overm1nd: well, that varies depending on the charm
<overm1nd> suppose a forum like discourse
<zchander> Going to have a look at that. The reason I asked this, The OwnCloud charm only accepts NFS as shared storage.
<marcoceppi> overm1nd: the postgresql charm, for example, contains information in the README about how to achieve this
<overm1nd> (by the way I really appreciated your work)
<marcoceppi> zchander: ah, adding ceph as an option would be really awesome
<overm1nd> thx marcoceppi I will dig in to this
<marcoceppi> overm1nd: oh dear, sorry you're using that charm, it needs a bit of work to be compatible with latest upstream sadly
<overm1nd> I'm planning to use it
<overm1nd> now I'm using the docker from sam
<overm1nd> but I would like to move everything on juju
<marcoceppi> overm1nd: the change isn't that drastic, if you get to it before I do, but ever since Discourse started tracking database settings for production in a different file than database.yml the charm has become broken
<marcoceppi> I haven't had time to look at it, but someone with time and knowledge of discourse could patch it pretty quickly
<overm1nd> I will wait :P
<marcoceppi> The charm's on github if you want to give it a crack, otherwise I'll try to fix it next week
<overm1nd> I still need to experiment making my own charm
<overm1nd> and understand migrations
<marcoceppi> overm1nd: cool, well if you have any questions while writing your own charm feel free to let us know!
<overm1nd> the real value for me is not having to deploy everything again and again when moving from one host to another
<overm1nd> thx your support is precious
<marcoceppi> overm1nd: so, doing backups up until recently was painful because we had no real way to just run commands against a charm unless we piped them through ssh, now there's a juju run command which lets us just fire arbitrary commands against the environment. So it'd be easy to just say `juju run --unit mysql/0 mysqldump my-db > /tmp/db.sql` then rsync that file to a new mysql server and restore
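The backup flow marcoceppi describes, written out as a runnable sketch. `juju` and `rsync` are stubbed with shell functions so the commands can be traced without a live environment; the unit and db names are the examples from the discussion:

```shell
# Stub the commands so the sequence runs anywhere; each stub just
# echoes what it would have executed.
juju() { echo "juju $*"; }
rsync() { echo "rsync $*"; }

# Dump the database on the running unit via juju run...
juju run --unit mysql/0 "mysqldump my-db" > db.sql
# ...ship the dump to the new server, then restore it there.
rsync db.sql new-mysql-host:/tmp/db.sql
juju run --unit mysql-new/0 "mysql my-db < /tmp/db.sql"
```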
<themonk> marcoceppi: hi
<marcoceppi> not super sexy as far as commands go, but easier. Charms should start having their readmes updated to reflect this new ability in the near future
<marcoceppi> themonk: o/
<overm1nd> this sounds great
<themonk> marcoceppi: in a subordinate charm, the install script runs after add-relation, right?
<marcoceppi> overm1nd: there's also work on a generic backup charm. So you could just deploy this backup charm to your db service, configure it to put the backups in say s3, then in your new deployment, deploy the backup charm in a restoration mode and it'll pull the backup from s3
<overm1nd> sounds greater
<lazyPower> themonk: not unless the add-relation script implicitly calls the install hook.
<marcoceppi> themonk: the subordinates install runs first. It follows the same routine a normal charm does, install -> config-changed -> start THEN relation-* hooks, but it only gets added to a service after you run juju add-relation
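The ordering marcoceppi lays out can be traced as a sequence (the hook names are juju's; the runner is just a stub for illustration):

```shell
# Trace the subordinate's hook order: the normal charm lifecycle
# runs first...
run_hook() { echo "hooks/$1"; }
for hook in install config-changed start; do run_hook "$hook"; done
# ...and relation hooks fire only after `juju add-relation` attaches
# the subordinate to a principal service.
for hook in relation-joined relation-changed; do run_hook "$hook"; done
```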
<overm1nd> you guys are doing really good stuff
<marcoceppi> overm1nd: I'd like to think so :D
<themonk> lazyPower: hi
<themonk> marcoceppi: hmm thanks
<lazyPower> marcoceppi: is there an env var that juju-deployer can read for local charm store path?
<marcoceppi> lazyPower: no idea
<lazyPower> ppetraki: ^
<marcoceppi> JUJU_REPOSITORY is the juju env variable I think
<marcoceppi> lazyPower: why not just make the path to the charm the branch?
<marcoceppi> I think deployer does relative paths
<ppetraki> tried that, maybe I messed it up
<ppetraki> actually, nm, I messed up, I had branch paths defined and... found a bug in my local bundler :)
<hazmat> overm1nd, ping..
<overm1nd> hi hazmat
<overm1nd> when you have time I will appreciate your help
<hazmat> overm1nd, greetings.. got meetings for the next 3hrs :-( would you have time this afternoon?
<hazmat> er.. relatively afternoon ;-)
<hazmat> dk
<overm1nd> yes
<hazmat> i can make some time in an hr
<overm1nd> just ping me
<hazmat> cool
<overm1nd> thank you very much
<zchander> :q
 * zchander forgot, once more, to select the correct window
<overm1nd> these mistakes can lead you to big trouble with people :P
<Tug> Does it matter if I use a System V init script in my charm, or is there a reason I should really use upstart ?
<marcoceppi> Tug: you can use whichever makes sense for you
<Tug> ok :)
<marcoceppi> Tug: there are only a few policies a charm must follow, and that's only if you want it to be in the charm store, otherwise you can pretty much do *anything*
<Tug> I see! that's why almost all charms I saw used upstart ?
<marcoceppi> Tug: well, upstart is the Ubuntu init system, so people writing charms typically target them to Ubuntu hence the upstart script. But there are charms that create init.d systemv scripts instead
<marcoceppi> it's all up to the charm authors preferences, upstart is not a required charm store policy
<marcoceppi> Tug: this is the policy, btw, https://juju.ubuntu.com/docs/authors-charm-policy.html
<Tug> ok cool, I was just checking I wasn't missing out on big features
<marcoceppi> Tug: again, that's just if you want it in the charm store, if it's just for personal use  you can do whatever you like
<Tug> alright
<Tug> thx, you're being really helpful
<Tug> something I don't get in node-app charm: the `app_user` config parameter defaults to `ubuntu` but I don't see the script trying to create the user
<Tug> I don't think `ubuntu` is a default user, so how is it going to work out of the box ?
<marcoceppi> Tug: the ubuntu user is the default user in all Ubuntu Cloud images
<Tug> ok marcoceppi
<Tug> does it have a home ?
<marcoceppi> Tug: yes, /home/ubuntu
<Tug> ok good to know :)
<hazmat> overm1nd, pong
<overm1nd> hi
<overm1nd> you asked me 3 question
<overm1nd> but how can I log in to the juju bootstrap machine if the bootstrap fails?
<hazmat> overm1nd, using your ssh key
<overm1nd> yes of course
<overm1nd> i connect to the droplet via putty using my ssh-imported key
<overm1nd> and the key is present in juju/ssh also
<overm1nd> so this part is working
<hazmat> overm1nd, putty means the private key is on your desktop
<hazmat> overm1nd, is the private key on the droplet you're using as a juju client?
<hazmat> overm1nd, can we do a hangout/screenshare?
<overm1nd> yes is the same key
<overm1nd> I have it on the docean panel
<overm1nd> and a file on my desktop for putty
<overm1nd> we can do if you like
<overm1nd> maybe I'm missing something really stupid
<overm1nd> in the droplet i'm using to start the bootstrap machine
<overm1nd> i have a file in juju/environments/local.jenv
<overm1nd> the key is also present there
<overm1nd> you can't reproduce the problem?
<hazmat> overm1nd, the problem is you also need to have it in ~/.ssh of the machine/droplet you're using as the client
<overm1nd> ehm, I think I have it, otherwise I could not connect, right?
<hazmat> overm1nd, right.. but you're saying you're connecting with putty from your desktop
<hazmat> which is not the same at all
<hazmat> the key has to be accessible to where the ssh client is being run
<overm1nd> guys you rock! thanks hazmat so much for the help!
<hazmat> overm1nd, np.. enjoy..  bug/feature suggestions welcome as well.
<overm1nd> of course I will spread the juju word
<cjohnston> are there any docs for charm helpers?
<marcoceppi> cjohnston: not yet
<cjohnston> awesome
<overm1nd> guys which is the preferred way to install multiple wordpress site on the same node?
<overm1nd> I see a lot of charms for wordpress
<overm1nd> wordpress-mu is still the way to go?
<sarnold> actually thinking of wordpress, marcoceppi, is this something that could be fixed in our wordpress charms? :) http://blog.sucuri.net/2014/03/more-than-162000-wordpress-sites-used-for-distributed-denial-of-service-attack.html
<marcoceppi> overm1nd: wordpress-mu has been built in to wordpress for a while
<marcoceppi> overm1nd: the charm would need to be reworked, I've been planning to rewrite it for a while but haven't gotten around to it yet
<overm1nd> I see, this is why I was asking
<marcoceppi> sarnold: good find, open a bug and I can have default installs disable that
<overm1nd> mmm, I have a service stuck in dying since 10 minutes
<overm1nd> is that normal?
<marcoceppi> overm1nd: is the agent-state in error?
<marcoceppi> overm1nd: to answer your question, no
<overm1nd> yes
<overm1nd> first deploy ever of mysql failed
<overm1nd> lol
<marcoceppi> overm1nd: so, destroy-service requests are queued just like any other
<marcoceppi> if the service or unit is in an error state, it won't process any events
<marcoceppi> run juju resolved mysql/0
<marcoceppi> to clear the error flag and proceed to the other events
<marcoceppi> overm1nd: https://juju.ubuntu.com/docs/charms-destroy.html#caveat-dying
<overm1nd> ok
<marcoceppi> overm1nd: you may have to run resolved several times if more errors occur
<overm1nd> I did destroy-service before but it was not processing
<overm1nd> I should read more docs :P
<marcoceppi> overm1nd: in 1.17.4 and above there's a --force flag that will allow you to remove services bypassing the state of the service/unit
<overm1nd> ok thx
<overm1nd> worked
<overm1nd> I was a bit worried about it was not doing anything
<overm1nd> mmm it's still failing to start
<overm1nd> but I have to go now, I will dig into the problem tomorrow, thx for your help
<overm1nd> see you
<Tug> I'm trying to debug a charm on local
<Tug> $ juju debug-log
<Tug> Permission denied (publickey,password).
<Tug> ERROR exit status 255
<Tug> same error as in the vm
<lazyPower> Tug: local provider right? those logs are actually stored in $HOME/.juju/local/logs
<lazyPower> there's an open bug against debug-log not working properly on local deployments.
<Tug> ok thanks lazyPower, I'm going to check
<jose> hey marcoceppi! I was wondering if you would like to do another openweek session on juju charming
<lazyPower> jose: we have our charm school schedule posted on the fridge
<lazyPower> http://fridge.ubuntu.com/calendars/
<jose> lazyPower: openweek is a different classroom team event, where we have a week full of sessions on how to get involved with the community, see https://wiki.ubuntu.com/UbuntuOpenWeek :)
<lazyPower> Hmm... I'd do that.
<lazyPower> an hour long session right?
<jose> oh, really? that'd be awesome!
<jose> yep, you get to choose your slot
<lazyPower> jose: before i fully commit let me pick a small app to charm
<lazyPower> we'll live-dev a charm
<jose> sure, we can use on-air for that
<lazyPower> I'll follow up Monday?
<jose> sure, sounds good
 * lazyPower thumbs up
<jose> thank you :)
<lazyPower> jose: I'll do a demo charm for Piwik - the open analytics platform
<jose> lazyPower: sounds good! which slot would you like to grab?
<jose> open slots are the blanks here https://wiki.ubuntu.com/UbuntuOpenWeek
<lazyPower> 1800UTC on Thursday
<lazyPower> jose: Actually, lets go Tuesday. Get it out of the way early
<lazyPower> so if you want to do another other juju topics, someone can follow the charm school
<jose> ok, tuesday at 18 utc?
 * lazyPower nods
<lazyPower> sounds good to me
<jose> cool, do you have a wiki page?
<lazyPower> i do not
<jose> ok, I'll just link to LP
<jose> and it's on the schedule now, thanks a lot! :)
<Tug> sorry I don't get it: http://pastebin.com/y75UbaGS
<Tug> install: line 27: syntax error in conditional expression
<Tug> looks like a bash error
<Tug> but I can't see any
<lazyPower> Tug: and line 27 that i see is [[ -x /usr/sbin/nginx ]] || install_nginx
<Tug> yes
<Tug> same syntax as install from node-app charm
<lazyPower> hmm
<lazyPower> the syntax looks fine
<Tug> yeah really weird
<lazyPower> when you enable the xtrace, is that indeed where its choking?
<Tug> I'll try but I just realized I may have forgotten to install juju-local
<Tug> is there a way to remove the failed service without destroying environment ?
<Tug> destroy service set the service to "life: dying"
<Tug> but it's not removed from the environment
<lazyPower> Tug: are you on stable or devel series of juju?
<Tug> lazyPower, xtrace you meant with "set -eux" ?
<lazyPower> yeah, adding the x flag.
<Tug> lazyPower, yes
<lazyPower> Tug: i meant, are you on stable? or are you on devel?
<Tug> $ juju --version
<Tug> 1.16.6-saucy-amd64
<Tug> stable I think
<lazyPower> ok, you don't have the force flag on that version of juju
<lazyPower> :(
<Tug> I can go to devel if you want
<lazyPower> i dont recommend it
<lazyPower> there's no upgrade path for deployments made with devel
<Tug> what is the force flag going to do ?
<lazyPower> you could force destroy the machine, then the service would remove itself
<lazyPower> there's got to be a failed hook in your env if its not clearing itself up and stuck in dying
<lazyPower> you'll have to resolve it using juju resolved service/# until it goes away.
<lazyPower> if its a dependent service, i recommend looking at why it failed on the dependent service
<Tug> yeah, that's what I'm doing: "juju destroy-environment"
<Tug> yeah I'm actually debugging it
<Tug> and it's the bash error
<lazyPower> you can't destroy a service while you're in debug-hooks :|
<Tug> but I can't figure it out
<lazyPower> the hook execution doesn't complete until you leave that context, which is why it would be stuck in a dying state.
<Tug> I'm not using debug-hooks
<Tug> I'm on the local provider
<Tug> so it does not work
<Tug> I'm just tailing the log file
<lazyPower> ... debug-hooks works on the local provider last i checked
<lazyPower> remoting into my 1.16 farm, 1 moment. I'll validate that statement
<Tug> oh sorry I was mixing with debug-log
<Tug> never tried debug-hooks actually
<lazyPower> yeah :) debug-log is bugged atm
<lazyPower> oh man its great
<lazyPower> run your hooks in interactive mode to debug
<lazyPower> make live edits to your hooks and re-run them
<lazyPower> its actually how i write 3/4 of my charms when i'm prototyping.
<Tug> ok let's try :)
<Tug> wow!
<lazyPower> pretty neat huh?
<Tug> it logged me in the machine ?
<Tug> yeah
<lazyPower> yeah, you're in an interactive tmux session. as hooks execute, you'll see the context of the tmux session change
<lazyPower> so, to run your hooks you just call
<lazyPower> hooks/hookname
<lazyPower> (From within the hook context, that does nothing if you're not in an executing hook context)
<Tug> mmm
<Tug> How can I run it again now that it has errored ?
<lazyPower> juju resolved -r service/#
<lazyPower> the -r is shorthand for --retry
<Tug> so cool !
<Tug> yeah so I'm back to that bash error ^^
<Tug> but it sure is better than tailing logs
<lazyPower> Tug: start peeling away the layers of complexity
<Tug> yeah I think the error is misleading
<Tug> install % [[ -x /usr/sbin/nginx ]] || echo "hello"
<Tug> hello
<lazyPower> so its in that method
<lazyPower> ?
<Tug> neither
<Tug> I just copy pasted it in the shell and it worked
<lazyPower> hmm
<lazyPower> did you set eux on your shell?
<Tug> yes
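The pasted line is valid bash, as Tug's shell test confirms, so the failure is likely something invisible in the hook file rather than the syntax itself. One possible cause consistent with this exact error (not established in the discussion, just a common culprit) is a CRLF line ending stuck to the `]]` token; `hooks/install` below is Tug's hypothetical hook path:

```shell
# Reproduce a similar syntax error with a carriage return glued to ]]:
printf 'if [[ -x /usr/sbin/nginx ]]\r\nthen echo found\nfi\n' > hook-sample
if ! bash -n hook-sample 2>errors.txt; then
  echo "syntax check failed, as with a CRLF-damaged hook"
fi
# Also worth checking: [[ ]] is a bashism, so the hook's shebang
# should be bash, not /bin/sh.
head -1 hooks/install 2>/dev/null || true
```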
<sarnold> hey Tug :)
<Tug> hi sarnold
<Tug> lazyPower, how can I resume debugging ?
<lazyPower> Tug: beg pardon?
<Tug> I want to try again
<Tug> $ juju resolved -r nirror-front/0
<Tug> ERROR cannot set resolved mode for unit "nirror-front/0": already resolved
<lazyPower> ah, well since its in the install hook
<lazyPower> and its very difficult to attach to the unit before it kicks off the install hook
<lazyPower> i would temporarily return an exit code > 0
<lazyPower> eg: return 1 from your install hook
<lazyPower> then you can attach to it and repeat the steps
<Tug> mm, it's marked as installed now
<Tug> agent-state: installed
<lazyPower> destroy and try again :)
<Tug> ok :)
<lazyPower> ok, i'm out for now
<Tug> thanks for your help lazyPower
<lazyPower> Tug: if you get stuck and nobody's responsive in #juju over the weekend try the list.
<Tug> ok
<lazyPower> and no problem :) Happy to help
<lazyPower> feel free to ping me if you need anything
<Tug> thank you :)
<Tug> is manual provisioning not available in stable version ?
<Tug> $ juju switch manual
<Tug> ERROR "manual" is not a name of an existing defined environment
<marcoceppi> Tug: it is, but you have to edit the environments.yaml file
<marcoceppi> I think in 1.16 it's called "null" which is dumb
<marcoceppi> just change "null" to manual
<Tug> yeah I saw the "null" entry, wonder what it was for
<Tug> thx marcoceppi I'll do that
<marcoceppi> Tug: we renamed null to manual provider
<Tug> I set "bootstrap-user: root" but it's trying to connect using my current user
<Tug> found this bug https://bugs.launchpad.net/juju-core/+bug/1280432 it's probably related
<_mup_> Bug #1280432: manual provider regression on bootstrap-user <manual-provider> <juju-core:New> <https://launchpad.net/bugs/1280432>
<blackboxsw> marcoceppi, still working on getting that review through our team to get the block-storage-broker charm in shape for the charm store. thanks for the comments. we'll have something merged in next week for that to also include EC2 support and copyright files, and I'll ping you on that review
<marcoceppi> blackboxsw: sweet
<blackboxsw> interesting juju destroy-service question for folks for a principal and subordinate relationship.
<blackboxsw> the subordinate provides a mounted volume to the principal, and will not unmount that volume until the principal's service is stopped
<blackboxsw> but relation-departed fires first on the subordinate during juju destroy-service principal-service
<blackboxsw> so I'm wondering if there is a way I can make the subordinate's departed hook wait or replay after the principal's
<blackboxsw> the principal's departed relation will stop the service in question
<blackboxsw> I was wondering if juju-run would give me this functionality from the subordinate (to call the principal's stop hook) but I can't run juju-run from within a hook context
<blackboxsw> just musing about how to solve the subordinate/principal hook ordering dependency
<blackboxsw> just a note: juju remove-relation principal-service subordinate-service seems to fire the principal's departed hook 1st, so this didn't cause a problem. just juju destroy-service I think
<blackboxsw> again it's friday, so I was dropping this bomb out there to see if there were any wild ideas
#juju 2014-03-22
<jose> marcoceppi: hey, just as a heads up, you may want to close this bug when/if the charm is on the store: https://bugs.launchpad.net/charms/+bug/1294232
<_mup_> Bug #1294232: Charm Needed: Magento <Juju Charms Collection:New> <https://launchpad.net/bugs/1294232>
<themonk> marcoceppi: hi
<themonk> if provider charm has only relation joined and requirer charm only relation changed will it be wrong? here provider relation joined sets variable and requirer charm gets those variables
<Tug> hi, I tried manual provisioning yesterday and it was working like a charm
<Tug> now if I try to connect to the machine agent (just doing juju status in fact)
<Tug> ERROR state/api: websocket.Dial wss://my.domain.com:17070/: dial tcp XXX.XXX.XXX.XXX:17070: connection refused
<marcoceppi> Tug: you're using 1.16?
<Tug> nope, I switched to devel :s
<marcoceppi> devel should work with manual provider. what version Ubuntu is the machine?
<Tug> the machine with the client or the agent ?
<marcoceppi> agent
<Tug> ubuntu server 12.04
<marcoceppi> :/
<Tug> yeah it was an experiment...
<marcoceppi> ssh in to the machine, run sudo initctl list | grep juju
<marcoceppi> so far you are doing all the right things
<Tug> I have "juju-db stop/waiting" that's no good right ?
<marcoceppi> you should have two juju jobs
<marcoceppi> so the provisioning seems incomplete
<Tug> jujud-unit-redis-master-0 start/running, process 10004
<Tug> jujud-unit-nirror-front-0 start/running, process 10006
<Tug> jujud-unit-nirror-proxy-0 start/running, process 10005
<Tug> juju-db stop/waiting
<Tug> jujud-machine-0 start/running, process 10003
<marcoceppi> but yes, that should be running
<marcoceppi> oh, start the juju-db job then
<Tug> I may have reconfigured mongodb for my needs
<Tug> is it using mongodb ?
<marcoceppi> sudo start juju-db
<marcoceppi> yes
<Tug> :s
<marcoceppi> juju-db is mongodb
<Tug> yeah I wiped out the db to connec to a replicaset
<marcoceppi> which explains it, then. if you want to run a charm on node 0 which uses mongodb, try --to lxc:0 instead
<Tug> ok but I would have to start over because the configuration is lost right ?
<marcoceppi> Tug: you might be able to save it but odds are it's best to start over
<Tug> ok, thanks for helping me again marcoceppi :)
<marcoceppi> Tug: np!
<Tug> next time I try the real thing in AWS ;)
<zchander> ping marcoceppi
<Tug> I wonder how I will update my application once deployed though. Is there an API or something to execute commands on all the units of a service ?
<Tug> I like the way deployment is done with capistrano
<Tug> I'm not a fan of polling a deployment repository
<marcoceppi> Tug: typically, you can make a version configuration option to decouple charm from application version
<marcoceppi> Tug: some charms even do something like repo, branch where repo can be a git, bzr, or hg repository and branch can be a git tag, git branch, bzr tag, etc
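The pattern marcoceppi describes might look like this in a charm's config.yaml; the option names and defaults are illustrative, not taken from any particular charm:

```yaml
options:
  repo:
    type: string
    default: ""
    description: Git, bzr, or hg URL to pull the application code from.
  branch:
    type: string
    default: "master"
    description: >
      Tag or branch to deploy, decoupling the application version
      from the charm revision.
```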
<Tug> ok, but if I change the config option of the service all units are updated ?
<Tug> what if I want to update half of it to keep the system online for instance ?
<lazyPower> Tug: I may be wrong, but i don't think juju supports that ootb. You would want to deploy 2 clusters of the application, and update them independently behind a load balancer
<lazyPower> eg: wordpress1, wordpress2, behind haproxy
<lazyPower> set wordpress1 to perform the update, haproxy will route to wordpress2 while wordpress1 is "down" for the update.
<marcoceppi> Tug: lazyPower is correct. You're managing service groups, not units. If you want to maintain different service groups and do cascading upgrades, deploy your service twice with different aliases (juju deploy wordpress wp1; as an example)
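As a sketch, the two-service-group layout lazyPower and marcoceppi describe could be written as a deployer bundle along these lines (all names illustrative):

```yaml
wordpress-blue-green:
  services:
    wp1:
      charm: cs:precise/wordpress
      num_units: 2
    wp2:
      charm: cs:precise/wordpress
      num_units: 2
    haproxy:
      charm: cs:precise/haproxy
  relations:
    - ["haproxy", "wp1"]
    - ["haproxy", "wp2"]
```

Taking wp1 down for its upgrade leaves haproxy routing to wp2, then the roles swap.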
<Tug> ok makes sens :)
<Tug> *sense
<Tug> have to go, thx for the tips
#juju 2014-03-23
<designated_> I have a question regarding juju charms.  I'm currently using juju to deploy services on baremetal with maas on precise.  Are the juju charms supposed to configure the network interfaces on the nodes, or does that need to be done manually?  I've noticed various interfaces being referenced within the charms but none of them get configured when a service is deployed (interfaces like eth1 or logical interfaces like br0).
<designated_> I just want to make sure I have a good understanding of what the charm is responsible for and not become more frustrated because of unreasonable expectations.  If the charms are not responsible for configuring those interfaces, that's fine, I just want to be aware of that fact.
<rick_h_> designated_: it's a question on the juju radar.
<rick_h_> designated_: right now most charms can open/close ports
<rick_h_> they don't know or expect having dual nics
<rick_h_> designated_: there's work and ideas in progress to support networks as more of a 1st class citizen, but I don't think charms will be responsible so much. It's hard to colocate and such in some of those situations.
<designated_> rick_h_: thank you for the answer, just to clarify, if i configure interfaces manually (physical or logical), and then reference them in the charm config, those interfaces will get used correct?
<rick_h_> designated_: I think so. If you configure the baremetal and the charm has access to that then the charm can use it well enough
<rick_h_> designated_: but existing charms aren't built around the idea atm and it makes things hard to reuse in other cloud environments
<designated_> sounds like juju was designed more for cloud service deployment, like deploying on openstack for example and installing services on metal was an afterthought.  that clears things up.
<rick_h_> designated_: well, it's something thought about. I'd keep an eye on the future.
<designated_> rick_h_: thanks a lot, that clarifies a lot of the problems I've been having.
<hazmat> designated_, well.. not entirely.. re juju on baremetal.. some additional networking support is landing now (targeting vlans which get configured/exposed in maas).. and it's used quite often for deploying clouds on baremetal. but yeah, atm juju isn't going to do anything special wrt nic bonding or nic setup.
<hazmat> designated_, out of curiosity what sort of use cases do you have or would want to see here, just setting up vlan nics? and targeting workloads to vlans?
<designated_> eventually, link aggregation (bonding) and vlan tagging; right now I'm just trying to use multiple physical interfaces, and the juju charm didn't seem to be doing any network interface configuration, physical or logical.
<hazmat> designated_, and wrt to net conf needed, just enumerate devs and dhcp on them?
<hazmat> designated_, the notion is that some of this should be in maas as its the provider..
<hazmat> the degree of 1st class networking support in juju is more along sdn lines, configuring vlans, routes, etc.
<designated_> To resolve the NIC mapping inconsistencies, i spent some time yesterday writing a shell script to go through and map pci bus (in ascending order) to eth name for consistency, so that when we reference interface names in a charm it will consistently be the same across all physical nodes.  This then gets downloaded in the preseed and executed before the node reboots.
<hazmat> designated_, interesting
<designated_> had I known that all network interfaces needed to be configured before the charm gets ahold of them, I would have spent more time on that as well.  looks like it's back to the preseed to see if I can get maas to configure all physical network interfaces and possibly create all of the logical interfaces.
<hazmat> designated_, the maas team would probably be interested in hearing about that, or seeing the script if you're willing; i'm not entirely clear where they are with that and using the preseed/enlistment image for config
<hazmat> fwiw maas mailing list.. https://lists.launchpad.net/maas-devel/
<hazmat> designated_, re the  advanced networking blueprint for maas https://blueprints.launchpad.net/maas/+spec/t-cloud-maas-advanced-networking
<designated_> the logic is pretty simple, get a list of pci bus numbers, iterate through each one in ascending order and replace the name in the file udev creates, /etc/udev/rules.d/70-persistent-net.rules.  the script then gets downloaded in the preseed and executed before rebooting.
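The renaming logic designated_ describes could be sketched roughly as follows; this is a hypothetical reconstruction, with the PCI-bus-to-MAC mapping supplied as sample data rather than read from sysfs, that emits udev rules of the kind found in /etc/udev/rules.d/70-persistent-net.rules:

```python
def persistent_net_rules(nics):
    """nics maps PCI bus address -> MAC; returns udev rule lines naming the
    interfaces eth0..ethN in ascending PCI bus order."""
    return [
        'SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="%s", NAME="eth%d"'
        % (nics[bus], i)
        for i, bus in enumerate(sorted(nics))
    ]

# Illustrative sample: two NICs listed out of bus order.
sample = {
    "0000:02:00.0": "aa:bb:cc:dd:ee:01",
    "0000:01:00.0": "aa:bb:cc:dd:ee:00",
}
for rule in persistent_net_rules(sample):
    print(rule)
```

Sorting on the bus address is what makes the eth names consistent across identically built nodes, regardless of discovery order.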
<designated_> if you want it, I thre it on pastebin http://pastebin.com/WfTdPjtw
<designated_> s/thre/threw
<designated_> don't judge my hack of a bash script, I'm not a programmer by any means.
<hazmat> designated_, thanks
<hazmat> designated_, if you want it in maas, i'd recommend filing a bug against launchpad.net/maas
<designated_> hazmat: I would have thought this would be a more common problem.  couldn't find anything on the internet about how to deal with inconsistent mac to name or pci bus to name mapping so I wrote that to deal with it.  there might be a better way or it could be addressed elsewhere but I couldn't find anything.
<zchander> Hi all, I have written some additional hooks for the owncloud charm, to support Ceph (like the MySQL charm does), but how can I deploy this to my environment?
<zchander> Charm is in ~/owncloud_xjm/owncloud and juju deploy local:owncloud_xjm/owncloud gives me ERROR cannot get latest charm revision: charm not found in "/home/madmin/owncloud_xjm/owncloud": local:precise/owncloud
<lazyPower> zchander: juju deploy --repository=$HOME/charms local:owncloud
<lazyPower> zchander: ah wait, what i wrote wont work. Your charm repository needs to resemble the correct structure
<lazyPower> <somepath>/precise/<charmname>
<lazyPower> the series needs to be in that path otherwise it wont know what series your charm is targeting.
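The repository layout lazyPower describes looks like this:

```
<somepath>/              <-- pass this to --repository
└── precise/             <-- series directory
    └── owncloud/        <-- charm directory (metadata.yaml, hooks/, ...)
```

With that structure in place, `juju deploy --repository=<somepath> local:precise/owncloud` resolves the charm.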
<zchander> Ah, seems logical (now I know this, the error seems logical ;) )
<lazyPower> :)
<jose> lazyPower: congratulations on being accepted as a charmer!
<lazyPower> Thanks jose :)
<zchander> And deployed... Now making the relation
<lazyPower> zchander: I encourage you to pick up amulet testing with your charm modifications and extend the existing tests on the charm
<lazyPower> https://code.launchpad.net/~lazypower/charms/precise/owncloud/tests
<zchander> I'll try that later on..... I have created one new (python) script with two aliases, but it seems like this new file (incl. aliases) isn't deployed??
<lazyPower> the file you created isnt there? or the dependencies?
<zchander> The file I created isn't in <charm>/hooks
<zchander> (it seems)
 * zchander is scratching his (already) bald head
<lazyPower> zchander: show me your deploy command. Lets verify it didnt deploy from the charmstore
<zchander> juju deploy --repository=$HOME/owncloud_xjm local:owncloud --to=8
<lazyPower> hmm
<lazyPower> seems correct
<zchander> Complete path -> /home/madmin/owncloud_xjm/precise/owncloud
 * lazyPower ponders
<zchander> It uploads the original file(s), also downloads owncloud-6.0.2 (latest version) and untars it
<lazyPower> perhaps you had an older copy of the charm cached. try doing an upgrade charm
<lazyPower> see if it shows up
<zchander> I did have the original charm installed on this machine, but I had it destroyed
<lazyPower> (note, if your deployment is in an error state, it will wait until all the hooks have executed in the specified relationship sequence before it does the actual upgrade, and will then kick off the upgrade-charm hook)
<zchander> Can I also erase the cached version?
<lazyPower> not without some voodoo
<lazyPower> and i sadly do not posess that vooodoo
<zchander> hmmmmmmm, normally I wouldn't care to use some voodoo, but this is so new to me, and when I bork a machine I won't be able to fire it up again until tomorrow morning....
<lazyPower> lets not traverse that path then
<lazyPower> just upgrade the charm :)
<zchander> OK, I'll first install the original owncloud
<lazyPower> you shouldn't need to do that
<lazyPower> deploy from your local charm, and since its presently deployed, issue upgrade-charm
<lazyPower> if there are any changes it will go ahead and upgrade it. Otherwise it will return that its already at the latest version
<lazyPower> and refuse to do anything; therefore there's something else amiss
<zchander> OK, got the new relation files in place....
<zchander> Running juju debug-hooks for now
<zchander> First bugs found...... ;) Back to the drawing board? Not yet....
<zchander> I'll continue tomorrow. I am going to relax for now and play some BF4.
<zchander> lazyPower: Thanks for your help. I might get back with you tomorrow ;)
<lazyPower> zchander: np. Let me know when you're ready to dive into amulet. Extending the tests to support the CEPH option will be fairly trivial
<zchander> I'll do!
<zchander> Good night! (It's 22:57 GMT+1, here)
<davecheney> ahoy!
<rick_h_> matey!
<marcoceppi> o/
<davecheney> where's me booty ?
<marcoceppi> where ye left it
<davecheney> sage
<rick_h_> in his trunks!
<davecheney> marcoceppi: can i nag you about trusty charms
<davecheney> or should i stfu
<davecheney> your silence speaks volumes
<hazmat> marcoceppi, looks pretty clear.. markdown wins
#juju 2015-03-16
<Muntaner> good morning guys o/
<Muntaner> I'm having a little issue with security groups
<Muntaner> when I delete the charm that I'm developing, the relative security group in nova isn't destroyed
<Muntaner> how can I do this?
<Muntaner> hello guys
<Muntaner> how can I tell to a juju charm which image should it run in his VM when the service is deployed?
<lazyPower> Muntaner: that is denoted by the series of the charm
<lazyPower> Muntaner: eg: juju deploy trusty/mysql - will tell juju to allocate a trusty VM, and then deploy the charm on top of trusty.
<Muntaner> lazyPower, thanks
<jrwren> Muntaner: juju has a "default-series" option which you can set in environments.yaml, so that if you don't specify a series when deploying and there are charms in both series, it will default to this setting.
<R1ck> it states here: https://news.ycombinator.com/item?id=5738252 "I tested juju a few months ago and found it to be buggy and unreliable." this was however from almost 2 years ago. does anybody agree with that statement still?
<AskUbuntu_> Adding network to machine deployed by Juju | http://askubuntu.com/q/597514
<beuno> R1ck, not at all, we use it for plenty of production services in Canonical, as do many customers
<bdx> Hello, I am wondering if anyone can give any insight as to what charm or service parameters determine what interface compute nodes talk to storage on
<bdx> ??
<jcastro> marcoceppi, around?
<marcoceppi> yes
<jcastro> so I have a card that's like
<jcastro> "provide examples to what a good charm looks like"
<jcastro> and I'm going through the links in the docs
<jcastro> other than ones that should link to new charms, like say the services framework, is there anything in these example ones, like the vanilla forum ones, that needs to be fixed?
<marcoceppi> probably
<jcastro> evilnickveitch, heya
<evilnickveitch> jcastro, hi
<jcastro> the review queue link is 404
<jcastro> so I fixed them and pushed but I think you merged before
<jcastro> http://review.juju.solutions instead of manage.blah
<evilnickveitch> okay, will take a look
<bdx> marcoceppi, jcastro: How can I configure nova-compute to use ~(os-admin-network) for storage traffic?
<jcastro> I am unfamiliar with the nova-compute charm
<jcastro> we have a list for those iirc?
<evilnickveitch> jcastro, done
<bdx> Ohhh really?
<jcastro> thanks
<jcastro> jamespage, ^^^
<jcastro> evilnickveitch, we should make it so like if a 404 is detected when we build the docs or something it yells at us
<jamespage> bdx, hello
<jamespage> nova-compute -> ceph?
<bdx> jamespage: Hi, hows it going??
<bdx> yes!
<jamespage> so you're using the public and admin network configuration in the ceph charms right?
<evilnickveitch> jcastro, yes, well we had that lint tool before, but that was before we switched to markdown.
<bdx> jamespage: Correct
<evilnickveitch> it is on my list of things to add to the new universal build tool
<jamespage> bdx, ok so the nova-compute nodes will access ceph over the public network IP's  - so if you make the ceph public-network == os-admin-network that should work
<evilnickveitch> jcastro, but to be honest, it may take a while until that gets done, looking at all the stuff I have to do
<jamespage> this assumes that both ceph and nova-compute are both physically or logically attached to the same networks
<bdx> jamespage: Thats what I currently have....
<evilnickveitch> jcastro, however, we can probably cobble together a script to do it
<jamespage> bdx, what are you seeing?
<bdx> Thats the problem....my admin network is 1G....I get bottleneck on the 1G interface
<jcastro> evilnickveitch, ok I'll mention it at the sprint, see if someone is willing to have a go
<jamespage> bdx, do your compute and ceph nodes have 10G's or alternative 1G's that can be used?
<bdx> Yes, I have 2x 1G and 2x 10G on each node
<jamespage> bdx, ok - so in that config I'd probably bond the 2 x 1G's and run control plane traffic over that network
<bdx> jamespage: Here is what my 1G os-admin-network interface looks like on my compute node
<bdx> https://www.dropbox.com/s/ws3g577yjzq6v0v/Screenshot%202015-03-12%2011.55.35.png?dl=0
<jamespage> and do the same for the 10G's and run os-data-network and ceph-public-network over that
<bdx> jamespage: Here is my 10G os-data-network interface on compute node
<bdx> https://www.dropbox.com/s/vqt3z5dauyiewjj/Screenshot%202015-03-12%2011.55.14.png?dl=0
<bdx> jamespage: I now realize that os-data-network doesn't need to be 10G
<jamespage> bdx, well it might depending on how busy your tenants get
<jamespage> bdx, are you using the ceph nova backend for instance storage?
<R1ck> beuno: well yes, but seeing as it's Canonical that's developing it, you would say that.. I'm looking for independent opinions ;)
<bdx> jamespage: Yes
<jamespage> bdx, right - so that is going to get pretty busy with all the io
<jamespage> you def want that running over the 10G
<bdx> Totally, but that means I need a 10G switch for os-admin
<jamespage> bdx - so you need to configure the ceph-public-network with the network CIDR for the 10G nics you have
<jamespage> bdx: ceph-public-network does not have to be the same as os-admin-network
<jamespage> bdx: the compute units just need to have a network connection to ceph-public-network - preferably over the 10G links :-)
<bdx> jamespage: now we are getting somewhere
<jamespage> bdx, the network support across the charms is endpoint driven - the services when related will say 'connect to me over XXX' - ceph public network for ceph
<jamespage> clients will just use the most direct link they have
<bdx> jamespage: I understand that....but how does compute know what interface to talk to ceph-public-network?
<jamespage> bdx, by the magic that is linux network routing
<jamespage> bdx, linux will just make the best choice - 1) the interface attached to the network 2) an explicit route via a gateway 3) the default route
<jamespage> bdx, netstat -rn will tell you which of those will happen
<jamespage> 1) or 3) are most likely
<jamespage> don't ever do storage traffic via a router - the latency will suck
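The three cases jamespage lists could be sketched as a toy route-selection function; this is not the kernel's actual algorithm, and the networks used are examples only:

```python
import ipaddress

def route_for(dest, attached, static_routes, default_gw):
    """attached: list of on-link interface CIDRs; static_routes: {cidr: gw}."""
    dest = ipaddress.ip_address(dest)
    for cidr in attached:
        if dest in ipaddress.ip_network(cidr):
            return ("direct", cidr)      # case 1: interface attached to the net
    for cidr, gw in static_routes.items():
        if dest in ipaddress.ip_network(cidr):
            return ("static", gw)        # case 2: explicit route via a gateway
    return ("default", default_gw)       # case 3: the default route

# A compute node with a leg on the 10G ceph-public-network reaches a ceph
# mon directly, without crossing a router:
print(route_for("10.50.0.7", ["10.50.0.0/16", "10.70.0.0/16"], {}, "10.70.0.1"))
```

This is why simply attaching the compute units to the ceph-public-network CIDR is enough: case 1 wins, and storage traffic stays off the router.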
<jamespage> bdx, does that make sense?
<bdx> jamespage: Ok, so I create ceph-public-network: 10.50.0.0 (10G), ceph-cluster-network: 10.60.0.0 (10G), os-admin-network: 10.70.0.0 (1G), os-data-network: 10.80.0.0 (1G), os-internal-network: 10.90.0.0 (1G), os-public-network: 10.100.0.0 (1G)
<bdx> nova-compute only has params for os-data-network
<jamespage> yup
<jamespage> nova compute does not have any endpoints - it just consumes them
<bdx> So how does compute know to talk to 10.50.0.0 for storage traffic?
<jamespage> bdx, because it must have a 10.50.0.0 network connection
<jamespage> bdx, note the charms do not setup and configure network interfaces
<beuno> R1ck, I understand, I was just commenting on the stability, given that we run our most critical services on it (SSO, the software store, payments, etc)
<jamespage> bdx, they just detect and consume what's already there
<beuno> you would know fairly quickly if it wasn't stable   ;)
<jamespage> bdx, MAAS + Juju are developing features to support network interface configuration (discovery is already supported)
<bdx> jamespage: Ahh, ok....so nova-compute will know to talk to ceph-public-network for storage traffic even if I do not specify 10.50.0.0 anywhere?
<bdx> ok
<R1ck> beuno: awesome :)
<jamespage> bdx, yup - cause the ceph charm will pass it some 10.50 addresses - these get configured into /etc/ceph/ceph.conf and used that way
<bdx> jamespage: Phewwww, this is great news!
<bdx> jamespage: Thank you for taking the time to explain that.....it has been driving me crazy.
<jamespage> bdx, hey - it's a little complex right now as neither maas nor juju exposes networking in a consumable way to end-users or charms - that is coming - but the openstack charms jumped the gun on this due to the requirement to do what you're doing
<jamespage> bdx, you can use a special charm to config up your boxes first - I've seen people use the 'ubuntu' charm with some extra scripts called from config-changed hook to configure the network
<jamespage> that's a stop-gap until everything hooks up between MAAS/Juju/Charms
<bdx> jamespage: Ahhh totally, thats a great idea.
<bdx> jamespage: I have a feeling what you advised is going to be exactly the fix I am looking for...for the time being.
<jamespage> bdx, once everything is up and networked, you can then use the "--to" syntax to target services at particular machines
<jamespage> bdx, erm so you will have to re-deploy your ceph cluster - its not possible to switch the public network post deployment
<jamespage> due to the way inter-mon communication works
<bdx> jamespage: Entirely.
<jamespage> bdx, now that would be a neat trick but I feel I could waste a lot of midnight hours trying to make that work
<jamespage> bdx, are you using lxc containers for any of the services? that's particularly tricky with the network split support right now
<bdx> jamespage: Totally....I am using the openstack-installer as our means of deployment here at DarkHorseComics
<jamespage> bdx, ok so the lxc containers juju creates will only get networked to eth0 via a bridge
<bdx> I am using nucs in my testlab for supporting services that aren't compute, quantum-network, and storage (ceph)
<jamespage> so that does limit what you can do
<jamespage> bdx, you can if you are feeling brave create the lxc containers with the right bridges/networking and then manually introduce them to your environment - but its a bit fiddly
<jamespage> bdx, I've also seen people use KVM machines networked up and then registered into MAAS for deployment - the power control is still manual (maas has some rudimentary virsh support - but it's not for remote machines - just testing).
<bdx> jamespage: Totally......I just need to finish defining our deployment methodology....getting storage traffic off the os-admin-network/interface is one of my last issues to resolve.
<jamespage> bdx, awesome - hope this conversation unblocks you
<bdx> jamespage: I'm pretty sure you can use the "virsh" power type in maas
<jamespage> quite likely
<bdx> Thats what I use in my kvm labs...
<bdx> Thanks again for your support
<jamespage> bdx, btw which type of tenant networks are you going to use?
<jamespage> one of the overlay network types? (gre/vxlan)
<bdx> gre
<bdx> jamespage: Yea, gre...why?
<jamespage> bdx, oh wait - your using os-data-network - that helps
<jamespage> bdx, packet fragmentation can be awkward - make sure you configure the DHCP server for that network (or your static network config) to use an MTU higher than 1500 - preferably 9000
<jamespage> bdx, GRE carries some overhead - using a higher mtu ensures that you don't get packet fragmentation which can impact performance and cause network issues with nofrag packets
<jamespage> bdx; the ceph network would also benefit from that
<bdx> jamespage: Totally, I was thinking about opening up all interfaces to mtu 9000
<bdx> jamespage: Do you see any issue with that?
<jamespage> bdx, that's a good idea
<jamespage> "       option interface-mtu 9000;"
<jamespage> does the trick in isc-dhcp-server
<jamespage> you can edit the template for that in MAAS (on the assumption you are using MAAS for DHCP)
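In isc-dhcp-server terms, the option jamespage quotes sits inside a subnet stanza; a hypothetical example (addresses and range illustrative):

```
subnet 10.50.0.0 netmask 255.255.0.0 {
    range 10.50.1.1 10.50.255.254;
    option interface-mtu 9000;
}
```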
<bdx> Entirely, I am
<bdx> jamespage: I have edited my curtin_userdata to bring up my extra interfaces...do you think this is a reasonable way of doing this?
<bdx> jamespage: http://paste.ubuntu.com/10611047/
<jamespage> bdx, absolutely
 * jamespage looks at the details
<jamespage> bdx, hows that working for you?
<bdx> jamespage: Excellent!
<jamespage> bdx, maas curtin preseeds are not my strong point
<jamespage> bdx, you could use /etc/network/interfaces.d to fragment the config a bit - but that's my only comment
<bdx> I couldn't figure out how else to bring up my extra interfaces.....that was the only thing other than making a puppet class for them
<jamespage> eth1.cfg eth2.cfg etc...
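A hypothetical /etc/network/interfaces.d/eth1.cfg fragment along the lines jamespage suggests (addresses illustrative):

```
auto eth1
iface eth1 inet static
    address 10.50.0.7
    netmask 255.255.0.0
    mtu 9000
```

Splitting each NIC into its own file keeps the curtin-written main interfaces file untouched.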
<jamespage> bdx, this is where I've seen folk use a special charm to configure the networks up
<bdx> jamespage: Ok, I'll keep that in mind.
<jamespage> bdx, well good luck - I'm EOD
<jamespage> ttfn
<bdx> jamespage: Ok, thanks again!!
<jcastro> hey rick_h_
<jcastro> http://readme.io/
<marcoceppi> dear hatch, THANK YOU https://github.com/juju/juju-gui/pull/707
<hatch> marcoceppi: :D
<hatch> marcoceppi: it hasn't yet been QA'd by third parties so don't thank me YET ;)
#juju 2015-03-17
<zyga> hey, is there a base vivid charm for running some bleeding edge version of vivid?
<zyga> or if not, is there a way one can easily make one
<zyga> thanks!
<zyga> it's just for testing that some of the software we're building runs on daily vivid
<Muntaner> good morning o/
<Muntaner> guys, anyone can give me some help with the nfs charm?
<Muntaner> simply, I have my charm, and I'm configuring load balancing on it... I figured that I need to "share" many folders on my charm, and wanted to use the NFS charm to accomplish that
<Muntaner> with "share" I mean all the units of the charm should have the same file system
<Muntaner> does the NFS charm work as a NFS server?
<marcoceppi> zyga: there's no real difference except that you can't use upstart, you have to use systemd for init jobs. Otherwise just create a vivid directory in your local charm repository and start creating
<marcoceppi> Muntaner: the nfs charm does work as an nfs server
<zyga> marcoceppi: hmm, I'm not that familiar with juju, could you give me an example?
<gnuoy> jamespage, do you have a second for https://code.launchpad.net/~gnuoy/charm-helpers/nrpe-proxy/+merge/253166 ?
<marcoceppi> zyga: depends on what you're trying to test, basically do this: http://paste.ubuntu.com/10614347/
<marcoceppi> doh, forgot to include how to deploy
<marcoceppi> zyga: http://paste.ubuntu.com/10614351/
<zyga> marcoceppi: looking
<zyga> marcoceppi: we want mostly-vanilla vivid to run our tests on top
<zyga> marcoceppi: so just using juju as a tool to get to instances and discard them later
<zyga> marcoceppi: thanks, I'll give that a try :-)
<marcoceppi> zyga: `juju charm get ubuntu` will get you the Ubuntu charm which does absolutely nothing (but get you an instance of the series you deployed it under)
<marcoceppi> you'll need to make sure you have charm-tools installed from ppa:juju/stable for that to work though
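A minimal vivid charm skeleton in a local repository might look like this (names illustrative); per marcoceppi's note, any init jobs must use systemd rather than upstart:

```
<repo>/
└── vivid/               <-- series directory
    └── mytest/
        ├── metadata.yaml    # name, summary, description
        └── hooks/
            └── install      # executable hook script
```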
<Muntaner> marcoceppi, I'm following your wordpress charm, but now my "mount" command just hangs
<marcoceppi> Muntaner: that's odd, do you have any logs?
<marcoceppi> it's rather late my time currently, I'm about to log off for the evening, but that's where I'd start
<Muntaner> marcoceppi, nothing more than the juju logs
<marcoceppi> Muntaner: syslog and dmesg would be helpful for mount issues
<marcoceppi> Muntaner: are you testing on the local provider?
<Muntaner> marcoceppi, no, it isn't local
<marcoceppi> Muntaner: hum, if you post logs for syslog and dmesg I can take a look later on today when I get back online
<Muntaner> marcoceppi, thanks, I'll do it asap
<Muntaner> marcoceppi, can't find anything special in the logs, can paste them now if you want
<Muntaner> guys, anyone of you has used nfs charm?
<Muntaner> mount hangs and can't understand why
<lazyPower> Muntaner: I haven't recently - which charms are you using it with?
<Muntaner> lazyPower, my personal charm. Now I discovered that probably the nfs charm in the store isn't working - I can see in it that there are no nfs daemons running
<Muntaner> lazyPower, by creating my "exports" manually, seems like I'm "adjusting" it
<Muntaner> but can't figure why it isn't working
<lazyPower> Muntaner: are you running the NFS charm on the local provider?
<Muntaner> lazyPower, no, I got it from the store: my personal charm is run locally
<lazyPower> Muntaner: i mean using the local provider to deploy the nfs charm
<lazyPower> which it sounds like you are - and that will not work. The militant AppArmor profiles assigned to LXC containers will prevent it
<lazyPower> Muntaner: if you look @ the readme for NFS - it tells you how to work around this
<lazyPower> https://jujucharms.com/nfs/trusty/0 - under the heading "On the LXC host"
<Muntaner> lazyPower, no: I deployed it from the store. My cloud is a private openstack installed in a all-in-one server
<Muntaner> not using lxc
<lazyPower> ah ok, i misunderstood
<Muntaner> np :)
<lazyPower> let me bootstrap AWS and kick off a NFS deployment, 1 sec
<Muntaner> thanks lazyPower
<Muntaner> lazyPower, I figured that the NFS charm doesn't have hooks like "config-changed" or others... I'm a bit confused
<lazyPower> Muntaner: its not a very well formed charm - missing the config-changed hook. It's got immutable configuration in its current form
<lazyPower> Muntaner: can you file a bug against the charm here https://bugs.launchpad.net/charms/+source/nfs  and attach any relevant logs?
<Muntaner> lazyPower, and how does it work with wordpress? lol
<Muntaner> btw ok lazyPower, will do it asap :)
<lazyPower> Muntaner: it just came online for me - starting to probe the env to see whats running
<Muntaner> ok lazyPower fine
<Muntaner> when I deployed it, I got a machine with no daemons running
<lazyPower> did you add the nfs relation to the charm?
<lazyPower> looking at the hooks - it doesn't do anything until the nfs-relation-changed hook is fired, zero exports = no daemon can start.
<Muntaner> yes lazyPower, I did it
<Muntaner> but nothing gets written to exports...
<Muntaner> and so, no daemon is running
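For context, the kind of line the charm's nfs-relation-changed hook would be expected to write into /etc/exports (path and client subnet hypothetical):

```
/srv/data 10.0.0.0/24(rw,sync,no_subtree_check)
```

With zero such entries the NFS daemon has nothing to serve, which matches the no-daemon symptom above.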
<lazyPower> wp & mysql is coming up, i'll have a result shortly
<Muntaner> lazyPower, I never tested it with WP, to be honest: I always did tests with my personal charm
<lazyPower> Muntaner: confirmed - missing exports here as well.
<lazyPower> ping back with the bug number and i'll triage it accordingly
<Muntaner> lazyPower, so it basically never worked?
<lazyPower> it did at one time or it wouldn't be in the store; unfortunately it's got a bug filed against it as an unmaintained charm - which means it's exhibited problems in the past and was at risk of removal
<lazyPower> this is only further compounding that problem: it's utterly broken with nobody actively maintaining the charm
<AskUbuntu_> Difference between deploying bundle with quickstart or juju-deployer? | http://askubuntu.com/q/597826
<Muntaner> lazyPower, I'm writing the bug
<Muntaner> lazyPower, https://bugs.launchpad.net/charms/+source/nfs/+bug/1433036
<mup> Bug #1433036: NFS charm does not create "exports", daemons not running <nfs (Juju Charms Collection):New> <https://launchpad.net/bugs/1433036>
<mattyw> lazyPower, are you actually awake and working, isn't it early for you?
<mattyw> lazyPower, ^^ I have a quick question if you have 1 minute
<lazyPower> mattyw: yup
 * lazyPower reads backscroll
<lazyPower> mattyw: i dont see anything in the backscroll - fire when ready
<Muntaner> hey ho o/
<Muntaner> hey guys
<bdx> jamespage: You around?
<Muntaner> how do I create a bundle? the official page seems to be dead
<Muntaner> -> https://jujucharms.com/docs/charms-bundles/
<rick_h_> Muntaner: apologies there, that's a bug we've got a fix landed for and we're working on a release today: http://qa.storefront.theblues.io:6543/docs/1.20/charms-bundles is the page on our QA site atm
<rick_h_> Muntaner: pulled from the juju docs at https://github.com/juju/docs/blob/master/src/en/charms-bundles.md
<Muntaner> rick_h_, thanks!
<rick_h_> Muntaner: np, let me know if you have any questions
<Muntaner> rick_h_, where should I put the local charms in order to be deployed in a bundle?
<rick_h_> Muntaner: bah, caught me. Makyo do you know much of the deployer/local repo setup? I know you can define a local repo; do you need the whole series subdirs?
<Makyo> rick_h_, Muntaner a quick glance shows that yes, you do need series subdirs, but I don't know too much beyond that, sorry.  I haven't had to touch the charm stuff yet.
<Muntaner> Makyo, no problem, it isn't urgent.
<Muntaner> Makyo, so I can't deploy a bundle with charms on my filesystem, basically?
<rick_h_> Muntaner: yes, with the deployer
<Muntaner> rick_h_, my bundle has mysql and haproxy took from the official store, and two local charms. As far as I can see, the bundle fails since it can't find my local charms
<Muntaner> dunno what to do exactly
<Makyo> Muntaner, AFAICS, you can, with a charms/<series>/<charm> directory structure and the bundles in charms dir.  The deployer makes a few assumptions around there.
<Makyo> Muntaner, eg: deployer doesn't obey --repository flags, so it's PWD sensitive
<Muntaner> Makyo, I'm trying, but get a "An error occurred while deploying the bundle: ('No charm metadata @ %s', 'trusty/...blabla"
<Muntaner> Makyo, simply trying by drag and dropping the bundle file into the juju-gui, not CLI
<Makyo> Muntaner, oh, hm, yeah I don't know that that's supported yet, unfortunately.
<tvansteenburgh> you can use the format "charm: local:precise/memcached" in your bundles, just make sure JUJU_REPOSITORY is set
<tvansteenburgh> juju-deployer will handle that, anyway
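Pulling the pieces above together, here is a minimal sketch of a local repository layout that juju-deployer can resolve `local:` charm URLs against. The charm name `mycharm` and the bundle target name `mydeployment` are hypothetical; only the series-subdirectory layout and the `local:` URL form come from the discussion.

```shell
# Sketch of a local charm repository layout for juju-deployer.
# JUJU_REPOSITORY must point at the directory containing the
# series subdirectories (precise/, trusty/, ...).
export JUJU_REPOSITORY="$PWD/charms"
mkdir -p "$JUJU_REPOSITORY/precise/mycharm/hooks"

# Minimal charm metadata, so the deployer's "No charm metadata" check passes.
cat > "$JUJU_REPOSITORY/precise/mycharm/metadata.yaml" <<'EOF'
name: mycharm
summary: hypothetical local charm
description: placeholder used to illustrate the repository layout
EOF

# Bundle entry referencing the local charm, in the "local:series/name"
# form tvansteenburgh describes.
cat > bundles.yaml <<'EOF'
mydeployment:
  services:
    mycharm:
      charm: local:precise/mycharm
      num_units: 1
EOF
```

With a bootstrapped environment, `juju-deployer -c bundles.yaml mydeployment` would then deploy it.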
<rick_h_> Muntaner: the gui doesn't support local charms since it can't upload them to juju for you
<rick_h_> Muntaner: so you have to use the deployer which uses local juju commands to get the charm deployed
<rick_h_> Muntaner: ^ as tvansteenburgh says
<rick_h_> Muntaner: the GUI only supports charms that are in the charmstore as it can then pull them down from the public store api
<Muntaner> rick_h_, I never used the deployer, I always deployed local charms via juju deploy
<rick_h_> Muntaner: yea, time to use the deployer if you want a bundle with local charms from your hard drive
<Muntaner> rick_h_, will try immediately :)
<Muntaner> rick_h_, I'm getting a "deployer.cli: Config files must be specified"
<rick_h_> Muntaner: you need to pass -c bundles.yaml I believe
<rick_h_> Muntaner: that points to your bundle file
<Muntaner> rick_h_, great! seems to work \o/
<rick_h_> Muntaner: woot!
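For reference, a sketch of the invocation rick_h_ describes, captured in a script since it needs a live, bootstrapped environment to actually run. The target name `mybundle` is an assumption; a trailing target argument is only needed when the file defines more than one bundle.

```shell
# juju-deployer takes -c pointing at the bundle file, optionally
# followed by the name of a bundle defined inside that file.
cat > deploy.sh <<'EOF'
#!/bin/sh
# Requires juju-deployer installed and a bootstrapped environment.
juju-deployer -c bundles.yaml mybundle   # "mybundle" is a hypothetical target name
EOF
chmod +x deploy.sh
```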
<apuimedo> rick_h_: do you know who owns the neutron-api charm?
<rick_h_> apuimedo: looking at https://jujucharms.com/neutron-api/trusty/8 it's openstack-charmers with commit history coming from liam and james mostly (right column 10 revisions)
<marcoceppi> apuimedo: the openstack-chamers own it, it's a team of people maintaining the openstack charms
<apuimedo> rick_h_: marcoceppi: I wanted to ask about neutron-api support for midonet. I was talking about it a bit this morning with lazyPower as well.
<apuimedo> as I told lazyPower, for it to be clean/ideal it should be patches to neutron-api that enable it to add a relation to midonet-api with interface midonet (it would get the IP of that service from that)
<apuimedo> the second less ideal way, would be to do like the nsx and add config options and make the deployers check the IP of the juju deployed midonet-api
<apuimedo> and set it to neutron-api
<rick_h_> apuimedo: so you're looking to submit some updates to http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/trunk/files ?
<apuimedo> rick_h_: looking into the best fit for doing so
<rick_h_> marcoceppi: email the juju list about the charms and possible ways of updating perhaps?
<apuimedo> lazyPower also mentioned having a subordinate charm that provided extra templates, but I'm not entirely sure how one would go about that
<rick_h_> apuimedo: gotcha, so you're looking to chat out the best way to look into adding support. I'd probably suggest emailing the juju list and bring up the thoughts and let folks do some suggesting.
<rick_h_> apuimedo: and the folks that run that charm will hopefully chime in there and you'll get some good feedback
<apuimedo> rick_h_: ok. I generally prefer the short loop of IRC, but if the folks are not around, better send it to the mailing list.
<rick_h_> apuimedo: yea, async sometimes is best
<apuimedo> indeed
<cucumber> if a charm is in an error state and you upgrade it, will it try to do the upgrade? or does its state first have to be changed to "OK"?
<marcoceppi> apuimedo: if you want to submit changes to openstack charms, it's done a certain way
<marcoceppi> apuimedo: https://wiki.ubuntu.com/ServerTeam/OpenStackCharms
<apuimedo> thanks
<marcoceppi> apuimedo: you want to target the /next branch for whatever service for your merges
<apuimedo> ok, that means that any change would not be ready until kilo
<marcoceppi> apuimedo: unless you need it to be merged ahead of time
<marcoceppi> apuimedo: you can signal in the merge that it should also land in /trunk
<marcoceppi> it needs to be in next, but changes can be ported to current /trunk
<apuimedo> but they would be merged to next in any case, or do you do backports?
<apuimedo> aha, I see
<marcoceppi> apuimedo: exactly
<marcoceppi> jamespage: it'd be great if that link was in the README for the charms under # Contributing as there's been a lot of questions around that wiki page
<apuimedo> marcoceppi: how are the charmhelpers managed? I see that the charmhelpers installed with pip differ from those embedded in neutron-api/next
<apuimedo> so if I want to add to charmhelpers/contrib/openstack, where would be the right place to go?
<apuimedo> marcoceppi: nevermind, found it. with the lp:charm-helpers and then make-sync
<jcastro> hey asanjar or kwmonroe
<jcastro> do we have an example of a charm that lets you swap out OpenJDK/Oracle Java?
<jcastro> I'm putting together a list of examples.
<jcastro> something with a config option perhaps?
<cucumber> I have mysql/0 in "hook failed: "config-changed"", everything seems to be ok (the problem was a communication problem with the bootstrap node), so I ran "juju resolved --retry mysql/0", and then the unit downloaded some stuff from ubuntu.com and when it finished, it gave this error: "No such file or directory: '/var/lib/mysql/mysql.passwd'"
<cucumber> that file doesn't exist and I don't know what to do.. any suggestions?
<kwmonroe> jcastro: cory_fu pulled the java logic into a java_installer script that does the switch based on arch: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/view/head:/common/noarch/java-installer.sh
<jcastro> kwmonroe, ok so like this isn't in charm helpers or anything?
<cory_fu> jcastro: We went for a more pluggable implementation.  So, the charm loads the script that kwmonroe linked using jujuresources and calls the script to actually install whatever Java it decides to.  This allows a vendor to change how Java is installed just by changing a single line in the resources.yaml
<jcastro> perfect, thanks!
<cory_fu> jcastro: Previously, we had special logic to install openjdk or use a resources.yaml entry to download a binary installer.  Which was used depended on the cpu_arch
<cory_fu> jcastro: With the new approach, the installer script(s) could be bundled into the charm under a resources/ dir, and the resources.yaml entries could choose the correct one with a file: URL
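A hedged sketch of what cory_fu describes: a `resources.yaml` entry pointing at an installer script bundled under `resources/` via a `file:` URL. The field names follow jujuresources conventions as I understand them and may differ from the actual schema; the hash is a placeholder.

```shell
# Bundle an installer script inside the charm and select it with a
# file: URL in resources.yaml (illustrative layout, not the charm's
# actual one).
mkdir -p resources
printf '#!/bin/sh\n# hypothetical java installer script\n' > resources/java-installer.sh

cat > resources.yaml <<'EOF'
resources:
  java-installer:
    url: file://resources/java-installer.sh
    hash: d41d8cd98f00b204e9800998ecf8427e   # placeholder, not a real checksum
    hash_type: md5
EOF
```

Swapping which Java gets installed then means changing only that one `url:` line, which is the vendor-pluggability cory_fu describes.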
<jcastro> cucumber, I wonder if touching that file and rerunning it will help?
<kwmonroe> yeah jcastro, this is how we used to do it in charmhelpers if interested (line 149-168): http://bazaar.launchpad.net/~bigdata-dev/charm-helpers/framework/view/387/charmhelpers/contrib/bigdata/handlers/apache.py#L149
<jcastro> this is great, any other reusable chunks while I am writing this up?
<cory_fu> jcastro: kwmonroe's last link is the old way in charmhelpers.  If you want the new way, we can show you that code (which is also in the bigdata charmhelpers branch)
<jcastro> when do you suppose this will land in proper charm helpers?
<kwmonroe> jcastro: we're working on a DistConfig reusable module.  it lets you store immutable config in a dist.yaml: http://bazaar.launchpad.net/~bigdata-dev/charm-helpers/distconfig/revision/386
<jcastro> ack
<kwmonroe> that's stuff you may want to configure (like users and groups), but don't want to change post deployment
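A hypothetical `dist.yaml` in the spirit of kwmonroe's DistConfig module: values you choose at deploy time (users, groups, paths) but treat as immutable afterwards. The keys below are illustrative, not the module's actual schema.

```shell
# Sketch of an immutable-after-deploy config file for a big data charm.
cat > dist.yaml <<'EOF'
vendor: apache            # illustrative keys only
groups:
  - hadoop
users:
  ubuntu:
    groups: [hadoop]
dirs:
  hadoop_data:
    path: /var/lib/hadoop
    owner: ubuntu
EOF
```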
<cory_fu> It would be nice to get it cleaned up and ready during the sprint next week.  The main difficulty is that we're still iterating on the big data charms to get them ready for the store, so there's a lot of churn
<jcastro> right, for now I just want a link to point to people who might be interested
<jcastro> out of curiosity did anyone show this to the ~ibm-charmers?
<cory_fu> Not that I'm aware of
<cory_fu> CH code using the installer script: http://bazaar.launchpad.net/~bigdata-dev/charm-helpers/framework/view/head:/charmhelpers/contrib/bigdata/handlers/apache.py#L159
<kwmonroe> jcastro: i showed basic uname switching logic at the joint hangout a couple months back.  iirc, i showed the cassandra special config for ppc64le: https://github.com/juju-solutions/cassandra/blob/master/hooks/cassandra-common#L365
<kwmonroe> but i haven't shown them jdk switching
<jcastro> kwmonroe, can you take a note to post the JDK switching to the list at some point?
<jcastro> it might be useful to arm people as well
<kwmonroe> noted jcastro
<AskUbuntu_> Landscape openstack juju management | http://askubuntu.com/q/597979
<bdx> Does anyone on here know anything about ceph charms?
<thumper> bdx: don't know about others, but I don't, sorry
<thumper> some response is better than tumbleweeds
<blr> bdx: we're using storage-ceph with the block-storage-broker
<blr> bdx: https://jujucharms.com/u/tribaal/storage-ceph/trusty/6
<arosales> bdx, some of the ceph devs and maintainers are in the UK time zone so you may want to mail the list to get some coverage there
<arosales> bdx, where list == https://lists.ubuntu.com/mailman/listinfo/juju
<bdx> arosales, thumper, blr: Thanks
<arosales> bdx, np, looking forward to seeing what your question about ceph is on the list :-)
<bdx> arosales: My issue now stems from maas.....do you know if there is a way to have maas provision (commission) a node whose only connected interface is not eth0?
<bdx> It seems that if eth0 is not connected, the node fails commissioning and/or booting
<bdx> I've posted about this in #maas .... bit of a rough spot for me right now ;'(
 * arosales was just going to suggest #maas ...
<bdx> arosales: Do you see my question in #maas?
<bdx> I think this might be one for the mailing lists
<arosales> bdx currently are you manually commissioning via mac?
 * arosales just joined #maas so I missed the backscroll
<bdx> arosales: No. I have all nodes pxe and ipmi managed by maas
<arosales> I ~think~ you could manually bring this in, but I am sure you are looking for a programmatic way to do this, in case you have a few of these types of machines
<bdx> Entirely
<arosales> ya
<arosales> and it pxe boots ok, independent of maas, correct?
<bdx> yea.
<bdx> All nodes pxe boot and commission great when eth0 is connected to my maas-mgmt net
<bdx> If I disconnect eth0 and connect eth{1,2,3,4,5} and try to commission/boot the node, it fails and hangs right after ipmi power on (ipmi has a dedicated interface)
<bdx> The reason behind this is I need to have ceph-public-network == maas-mgmt net so nucs that host services like ceph-mon and ceph-radosgw can communicate on ceph-public-network(10G)
<arosales> ya, I could also see this where folks have mgmt over eth0, but have a high speed nic for all other traffic.
<bdx> Totally...that's kind of what I have going on...
<arosales> bdx, atm unfortunately I think the list may get you the most eyes
<arosales> juju list as well as maas mail list
<bdx> Totally. Thanks again arosales.
<arosales> I can see what you are trying to do, but don't have a readily available answer :-/
<arosales> bdx, I'll look for the post and encourage others when they are back on-line to reply :-)
<bdx> arosales: Thanks sooo much.....I have been trying to get this stack up for a while now....every bit helps. ThankS!
<arosales> bdx, oh np at all, just wish I had a better answer for right now
<arosales> bdx, but post to the list and I'll be sure to chase it down.
<arosales> bdx, in that post be sure to mention what maas version you are using
<bdx> Entirely.
<arosales> thanks, I think this will be very helpful to others too
#juju 2015-03-18
<wgrant> Can I convince Juju to somehow use an alternate port for SSH to a machine?
<wgrant> I need something other than OpenSSH exposed on a unit's port 22, and it would be nice to not have to use a firewall to NAT that.
<marcoceppi> bdx: I'm able to get maas to boot machines that have something other than eth0
<marcoceppi> (em1, eg)
<AskUbuntu_> MAAS - Cannot provision node with interface other than eth0 | http://askubuntu.com/q/598139
<AskUbuntu_> Juju - Openstack service charm networking configurations and limitations | http://askubuntu.com/q/598156
<stub> wgrant: I have never seen an option to change the ssh port, and if you did it would break 'juju run' since it uses ssh and has no option to use a different port.
<wgrant> stub: Right, I can always change the ssh port manually, but I was wondering about adjusting 'juju run' etc.
<apuimedo> gnuoy: ping
<gnuoy> apuimedo, hello
<apuimedo> gnuoy: Hi!
<apuimedo> that was fast ;-)
<apuimedo> I made a patch that adds midonet support to charm-helpers (similar to the one for calico and n1kv)
<apuimedo> and now I'm trying to do the part for neutron-api
<gnuoy> ah, ok
<apuimedo> *neutron-api charm
<apuimedo> gnuoy: (offtopic, cool how your last name includes a reverse gnu)
<gnuoy> apuimedo, thanks, a lucky chance of fate :)
<apuimedo> the thing is that the midonet plugin needs to write two configs
<apuimedo> whereas the config field for neutron_plugins is just a single string
<apuimedo> we basically need to put some config in /etc/neutron/dhcp_agent.ini as well
<lukasa_work> apuimedo: I think your individual neutron charm will maintain that config
<gnuoy> apuimedo, it sounds like you're going to need to update the quantum-gateway charm
<apuimedo> so I was wondering if I should just add it to neutron_api_utils.py:register_configs or resource_map
<lukasa_work> Oh yes, quantum-gateway is the right one
<lukasa_work>  /headdesk
<gnuoy> :)
<lukasa_work> (FYI, I'm the maintainer of the various Calico charms)
<apuimedo> ;-)
<gnuoy> apuimedo, have you taken a look at the quantum-gateway charm ? It already has n1kv login in the dhcp_agent.ini template if you want to look at an example
<gnuoy> s/login/logic/
<apuimedo> gnuoy: Oh! I'll look into it. I was originally modelling the neutron-api box to have also the metadata and the dhcp agent
<apuimedo> that's why I was targetting the neutron-api charm
<lukasa_work> apuimedo: You can always deploy quantum-gateway on the same machine as neutron-api
<apuimedo> lukasa_work: won't the fact that it is not defined as a subordinate charm prevent that?
<gnuoy> lxc to the rescue
<lukasa_work> apuimedo: Not necessarily. So long as the charms don't step on each other's toes you can do it.
<apuimedo> good to know. I assumed that without 'subordinate: true' juju would block it
<lukasa_work> At the moment you can deploy most of the OpenStack 'control' charms to the same node if you want to
<gnuoy> apuimedo, which charm are you saying is a subordinate ?
<apuimedo> gnuoy: no, I was saying that I thought that the fact that neutron-api and quantum-gateway are not subordinates would prevent them from being deployed together in the same box
<gnuoy> ah, I see
<gnuoy> apuimedo, If you want to house multiple services on the same box (to save metal) I'd suggest using lxc fwiw
<lukasa_work> LXCs are definitely safer
<apuimedo> ok ;-)
<lukasa_work> Though as I say I've not had trouble dumping various OpenStack components on the same node
<lukasa_work> I wouldn't rely on that state of affairs to continue
<apuimedo> thanks lukasa ;-)
<lukasa_work> NP =)
<apuimedo> lukasa_work: do you run the metadata agent in the quantum-gateway as well?
<gnuoy> apuimedo, yes. neutron-server is the only thing on the neutron-api charm
<lukasa_work> +1
<apuimedo> ok
<gnuoy> apuimedo, but from a charm's point of view most neutron settings are exposed via the neutron-api charm and it pushes them out to the other charms
<apuimedo> understood
<apuimedo> gnuoy: one thing that left me a bit puzzled about neutron-api is that nsx adds a few configs to config.yaml
<apuimedo> one of them, specifically 'nsx-controllers'
<apuimedo> shouldn't it have been better that those addresses would have been retrieved by adding a relation with them?
 * gnuoy goes and peeks at the charm
<apuimedo> I guess that it's because you can't deploy nsx controllers with a charm, but in my case, where the midonet api endpoint is deployed with a charm
<lukasa_work> I'd add a relation, apuimedo
<lukasa_work> I did that for calico-acl-manager
<apuimedo> I was thinking that probably the best would be to add a relation between midonet-api and neutron-api that uses the interface neutron-api
<apuimedo> and then neutron-api when that relation joins, if it is configured to use midonet, it updates the /etc/neutron/plugin/midonet.ini
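The relation-driven flow apuimedo sketches could look roughly like this hook fragment for the neutron-api charm. The relation/hook name and the ini keys are hypothetical (real midonet.ini settings will differ); `relation-get` only works inside a juju hook environment, so the hook is written out as a file here.

```shell
# Hypothetical midonet-api-relation-changed hook: on join/change, read
# the related unit's address and render the plugin config.
mkdir -p hooks
cat > hooks/midonet-api-relation-changed <<'EOF'
#!/bin/sh
set -e
api_host=$(relation-get private-address)   # IP of the related midonet-api unit
[ -n "$api_host" ] || exit 0               # relation data not set yet; wait
mkdir -p /etc/neutron/plugin
cat > /etc/neutron/plugin/midonet.ini <<CONF
[MIDONET]
# illustrative key only; consult the midonet plugin docs for real settings
midonet_uri = http://${api_host}:8080/midonet-api
CONF
EOF
chmod +x hooks/midonet-api-relation-changed
```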
<apuimedo> lukasa_work: which relation did you use? Are you using the plain neutron-api charm?
<apuimedo> lukasa_work: I see that you use the regular neutron-api
<apuimedo> but I don't understand how `juju add-relation calico-acl-manager neutron-api` works
<apuimedo> they do not share any interface that would realize which kind of relation it is fulfilling
<apuimedo> gnuoy: the template for the midonet plugin referred to in http://bazaar.launchpad.net/~celebdor/charm-helpers/midonet/revision/337 charmhelpers/contrib/openstack/neutron.py
<apuimedo> should be in charm-helpers too ( charmhelpers/contrib/openstack/templates/midonet.ini ) or in the neutron-api templates?
<apuimedo> I'd put it into neutron-api/templates/midonet.ini
<AskUbuntu_> Ceilometer deployment | http://askubuntu.com/q/598297
<schkovich> @marcoceppi In Nginx charm description it is mentioned that when "combined with nginx-site, nginx-php, or nginx-python will allow you to deploy independant VirtualHosts and scale those out." I can't find any of mentioned charms. :(
<marcoceppi> schkovich: https://jujucharms.com/u/hp-discover/website/trusty/3 https://jujucharms.com/u/marcoceppi/php-website/trusty/1
<marcoceppi> there is no "nginx-python" yet, that was optimistic of me
<schkovich> ok
<schkovich> those ones are just named differently
<schkovich> i thought that it might be a case :)
<schkovich> @marcoceppi Nginx version installed is 1.4.6 which is affected by several security flaws. Adding option to install Nginx from stable PPA will be great improvement. I checked the code and adding PPA should not be a big deal. Will you welcome PR or in bazaar wording request to merge?
<marcoceppi> I always welcome and appreciate merge requests
<schkovich> @marcoceppi what type does a configuration option need to be to get a checkbox in the gui?
<marcoceppi> schkovich: boolean
<schkovich> of course, what else :(
<my_chiguai> lazyPower: Thanks for the pointer to the elastic search charm. Did you have any notes on what the development process is?
<lazyPower> my_chiguai: what are you looking for in terms of dev process? thats kind of a broad subject.
<my_chiguai> indeed
<my_chiguai> I like the idea of using ansible with juju
<my_chiguai> https://micknelson.wordpress.com/2013/11/08/juju-ansible-simpler-charms/
<my_chiguai> and states
<my_chiguai> https://micknelson.wordpress.com/2013/06/24/easier-juju-charms-with-python-helpers/
<my_chiguai> and of course the starter template
<my_chiguai> https://jujucharms.com/docs/authors-charm-writing
<my_chiguai> My basic path at the moment (and this will be a slow path)
<my_chiguai> is going from github repo, development and testing, and deployment
<my_chiguai> possibly with vagrant: https://jujucharms.com/docs/config-vagrant
<my_chiguai> a lot of moving parts :)
<my_chiguai> were there specific resources you found helpful?
<my_chiguai> I am trying to pull together "The Guide" for me. Maybe make it a realist ( http://readlists.com )
<my_chiguai> s/realist/readlist
<my_chiguai> http://readlists.com/6b321992
<my_chiguai> in progress
<pdobrien> hi @lazypower @asanjar - having a problem getting the hdp-hadoop bundle to install
<pdobrien> looks like it's because it can't verify the package signatures
<my_chiguai> hmm looks like a number of pages on jujucharms.com are broken. The search results are all 404s. https://jujucharms.com/docs/search/?text=amulet
<my_chiguai> some cached links are available
<my_chiguai> http://webcache.googleusercontent.com/search?q=cache:Dkl2vQQqiBkJ:https://jujucharms.com/docs/tools-amulet+&cd=1&hl=en&ct=clnk&gl=us&client=safari
<rick_h_> my_chiguai: apologies, release is in progress the qa site is up atm http://qa.storefront.theblues.io:6543/docs/1.20/tools-amulet
<my_chiguai> no problem and thanks for the updated link!
<my_chiguai> anyone know if the github charm mirrors up to date?
<my_chiguai> on http://askubuntu.com/questions/432187/how-can-i-deploy-my-local-juju-charm-with-amulet-framework a comment by Marco Ceppi noted they were but that was March of last year and elastic search is much more recent than that. :)
<marcoceppi> my_chiguai: they are not
<my_chiguai> ah thanks marcoceppi
<marcoceppi> my_chiguai: we kind of abandoned keeping them in sync as we work towards a better charm store model
<marcoceppi> my_chiguai: you can follow these instructions to convert a bzr repo to a git one if that better suits your workflow
<marcoceppi> my_chiguai: http://paste.ubuntu.com/10621984/
<marcoceppi> my_chiguai: you may also need to run "git reset --hard" at the end
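One common way to do the bzr-to-git conversion marcoceppi mentions, captured as a script since it needs bzr (with the fastimport plugin) and a source branch to run. The pasted instructions may differ in detail; this is a sketch.

```shell
# Convert a bzr branch to a git repo via fast-export/fast-import;
# requires the bzr "fastimport" plugin.
cat > bzr2git.sh <<'EOF'
#!/bin/sh
set -e
src="$1"            # path to the existing bzr branch
dst="$2"            # directory for the new git repo
git init "$dst"
bzr fast-export --plain "$src" | git -C "$dst" fast-import
git -C "$dst" reset --hard    # as marcoceppi notes, may be needed at the end
EOF
chmod +x bzr2git.sh
```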
<lazyPower> pdobrien: Have you filed a bug? :(
<lazyPower> pdobrien: sorry you ran into that, we've been having some issues with the hortonworks repos as of late. They just came back online after an extended outage last week - and they probably just updated.
<lazyPower> my_chiguai: ah, sorry about the delay in reply. i just read scrollback
<lazyPower> my_chiguai: Actually - it was a learn as you go process for me as well. if you're running ubuntu native - going from concept => staging => production is pretty seamless with bundles, if you're using your personal namespace
<pdobrien> lazyPower: did not file a bug yet, wanted to see if it was a known issue.
<pdobrien> lazyPower: I was able to get it to deploy by creating a machine, manually updating the repo key, and then deploying the service
<lazyPower> pdobrien: ah ok, so its just the repository key that has changed?
<my_chiguai> marcoceppi: thanks much!
<pdobrien> lazyPower: not sure if it's changed, I found the key id in the install log, then ran sudo apt-key adv --recv-key --keyserver keyserver.ubuntu.com B9733A7A07513CAD to install
<my_chiguai> marcoceppi: I'll have to look into updating charms
<pdobrien> lazyPower: so not sure if the key changed, or if the charm just isn't looking in the right location anymore
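pdobrien's manual fix, generalized into a script to run on the failing unit (the key id comes from the charm's install log, per the discussion; network access to the keyserver is required, so it is only written out here, not executed).

```shell
# Re-import the repository signing key found in the install log, then
# refresh apt so the hdp-hadoop install can be retried.
cat > fix-hdp-key.sh <<'EOF'
#!/bin/sh
set -e
KEY_ID="B9733A7A07513CAD"   # key id pdobrien found in the install log
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$KEY_ID"
sudo apt-get update
EOF
chmod +x fix-hdp-key.sh
```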
<lazyPower> pdobrien: yeah if you could get a bug filed on that i'll make sure it gets routed to the right people and we get a fix in place quickly.
<pdobrien> lazyPower: on a somewhat related note, I'm trying to deploy the hdp-hadoop-hive-mysql-4 bundle, and the gui says: "Unable to deploy the bundle. The server returned the following error: invalid request: bundle "bundle-deploy" not found"
<pdobrien> lazyPower: will do
<lazyPower> pdobrien: https://bugs.launchpad.net/charms/+source/hdp-hadoop - link for filing the charm bug
<lazyPower> pdobrien: can you link me to the instructions you're following for my clarification?
<pdobrien> lazyPower: for hdp-hadoop?  Just deploying the charm via the gui
<lazyPower> pdobrien: i was referring to the hdp-hive-mysql bundle
<lazyPower> "bundle "bundle-deploy"" sounds like it may be an incorrect copy/paste stanza somewhere
<pdobrien> lazyPower: all I did was find it in the gui and click "Deploy this bundle" - didn't try via cli
<lazyPower> ok, let me stand up an env really quick and investigate, ta for the info
<lazyPower> pdobrien: this bundle, correct? https://demo.jujucharms.com/bundle/data-analytics-with-sql-like-6/?text=sql-like
<pdobrien> lazyPower: I was using https://jujucharms.com/u/lazypower/hdp-hadoop-hive-mysql/4
<lazyPower> pdobrien: ah, that's a precursor to the bundle i listed above.
<pdobrien> lazyPower: just tried the bundle you linked, and I get the same error
<lazyPower> pdobrien: confirmed the bug on my end
<lazyPower> pdobrien: I've filed a bug against juju-gui wrt this bug. If you want to follow along you can subscribe on the right.  https://bugs.launchpad.net/juju-gui/+bug/1433706
<mup> Bug #1433706:  invalid request: bundle "bundle-deploy" not found <juju-gui:New> <https://launchpad.net/bugs/1433706>
<pdobrien> lazyPower: thanks!
<murphyslawbbs> Hi, I'm hitting bug https://bugs.launchpad.net/ubuntu/+source/software-properties/+bug/1089389, I was wondering if there is a workaround and if so how I can implement it. Is there a way to add "local" scripts or code so I can set the proxy?
<mup> Bug #1089389: juju bootstrap fail behind a proxy when a gpg key must be imported <amd64> <apport-bug> <cloud> <precise> <running-unity> <software-properties (Ubuntu):Triaged> <https://launchpad.net/bugs/1089389>
<lazyPower> murphyslawbbs: the only thing that comes to mind is a subordinate - or juju-run against the service.   the subordinate doesn't sound helpful as the deployments will fail until the subordinate is run
<murphyslawbbs> lazyPower: is there some kind of order to charms so that the proxy would be set before other stuff runs?
<lazyPower> so juju-run is probably the best bet - and there's no guarantee of the run executing before the hooks are fired even if you are judicious about attaching to the node before the hooks fire. (small window between agent coming online and hook execution)
<lazyPower> murphyslawbbs: not unless you fork the charms and update the install hook :|
<lazyPower> murphyslawbbs: that is one options however, to do that, and when a fix is released moving back to the store charms with juju upgrade-charm --switch cs:series/service
<murphyslawbbs> lazyPower: ok i'll let that sink in a bit thanks
<lazyPower> murphyslawbbs: i just noticed the bug says this is failing on bootstrap
<lazyPower> that's not going to work either :( if the bootstrap node can't be stood up it's pretty much dead in the water unless you manually provision the node, update the proxy, then attempt to bootstrap to that existing node that has the proxy config setup
<lazyPower> murphyslawbbs: i've done this in the past successfully using maas tagging, and passing that as a constraint.  juju bootstrap --constraints="tags=bootstrap"
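lazyPower's maas-tagging workaround could look roughly like this, written as a script since it needs a live maas and a prepared node. The maas CLI subcommands vary between versions, and the profile name and system id below are hypothetical, so treat this as illustrative.

```shell
# Tag a manually prepared node in maas, then bootstrap onto it via a
# tag constraint.
cat > bootstrap-tagged.sh <<'EOF'
#!/bin/sh
set -e
PROFILE="admin"            # hypothetical maas CLI profile name
SYSTEM_ID="node-1234"      # hypothetical system id of the prepared node
maas "$PROFILE" tags new name=bootstrap || true     # tag may already exist
maas "$PROFILE" tag update-nodes bootstrap add="$SYSTEM_ID"
juju bootstrap --constraints="tags=bootstrap"
EOF
chmod +x bootstrap-tagged.sh
```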
<murphyslawbbs> lazyPower: does that somehow alter the maas preseed?
<murphyslawbbs> lazyPower: If so, wouldn't it be a problem that the deployments inside the bootstrapped machines using lxc wouldn't have the changes?
<lazyPower> negative. it's a workaround until it's properly patched, and isn't easily reproducible since there's a lot of manual intervention dependency in that method.
<lazyPower> ah yeah, that would more than likely be problematic too
<murphyslawbbs> lazyPower: maybe easier all round to create my own trusty image with the keyring and the proxies setup
<lazyPower> yeah, sorry i didn't have a better answer murphyslawbbs
<murphyslawbbs> lazyPower: oh no it's cool, I have an answer :)
<pdobrien> lazyPower: I did a fresh deployment of hdp-hadoop and it appears that the gpg key issue is resolved.... probably was a transient issue due to the issues hortonworks was having.  so I won't open a bug.
<lazyPower> pdobrien: alright, that much is known. Thank you for trying to repro and following up that it's ok.  We've got a known issue with the hortonworks repository dependency constantly going offline. We'll be working to keep a more reliable mirror in the near future
<lazyPower> There's a launchpad group if you're interested in joining the development efforts in terms of feedback and early releases - would you like a link?
<AskUbuntu_> New Openstack Autopilot stuck at: In progress - Configure availability zones - 98% | http://askubuntu.com/q/598434
<barchetta> anyone solid in juju agent debugging?
#juju 2015-03-19
<VijayTripathi> hello
<VijayTripathi> I am trying to deploy openstack cloud using Juju charms
<VijayTripathi> what is the best way to manage charms once they are deployed, e.g. make sure a particular service is shutdown/running on a node
<apuimedo> morning!
<lazyPower> Mornin apuimedo
<apuimedo> ;-)
<apuimedo> lazyPower: did you see my email to the juju list?
<lazyPower> I did, giving it some time for the OpenStack team to respond :)
<apuimedo> very well :-)
<apuimedo> It shows that I like more the relation based approach
<lazyPower> I feel that's an intuitive model to adopt. It's a bit more "juju'ish" if thats a word.
<apuimedo> yup :-)
<apuimedo> I'm in love with making as much as possible in relations and as little as possible in configs
<apuimedo> lazyPower: even for openstack version/distribution, for MidoNet charms I made a charm that sets the repos config for its relations to use
<lazyPower> apuimedo: thats one of our core tenants. If you can, distill in the best practice in the relationship.
<lazyPower> Configuration is fine - but it's certainly preferred to make it "just work" when people draw the relationships. Part of our selling point is working with upstream ISV's and getting that level of perf tuning baked into the charm for everyone
<apuimedo> exactly :-)
<apuimedo> I want them to just drag and drop if they want to
<lazyPower> *tenets
<AskUbuntu_> What does Service: storm has neither charm url or branch specified mean? | http://askubuntu.com/q/598665
<arosales> juju docs update, per Canonical IS, they are bringing up the environment right now. If all goes well eta on docs being back up is 3 hours.
<lazyPower> ta for the update arosales
<arosales> lazyPower: np
<AskUbuntu_> which openstack version will be installed with juju? | http://askubuntu.com/q/598768
<rick_h_> arosales: lazyPower docs shold be working
<rick_h_> proper release email will be coming in a bit but DNS is propogating and release finally out and up.
<lazyPower> solid work rick_h_, thanks for the update
<arosales> rick_h_: https://jujucharms.com/docs/1.20/getting-started working :-)
<arosales> with versioning, very nice
<arosales> next feature to land is a drop down to select versions
<arosales> rick_h_: thanks for the update and getting back up and running
<my_chiguai> rick_h_: those are the docs I was trying to use over the last couple of days. I speak for the (silent) users. Thank you!
<rick_h_> arosales: definitely
<rick_h_> my_chiguai: cool
<apuimedo> arosales: ping
#juju 2015-03-20
<AskUbuntu_> juju can not work with openstack juno ( trying for 10 days ) | http://askubuntu.com/q/598994
<AskUbuntu_> A dead lock cycle when using juju "sync-tools" command , any tips? | http://askubuntu.com/q/599003
<AskUbuntu_> how to specify the stream for juju when run juju metadata generate-tools command? | http://askubuntu.com/q/599008
<AskUbuntu_> Is it possible to manually start Juju hooks? | http://askubuntu.com/q/599048
<apuimedo> gnuoy: ping
<gnuoy> hi apuimedo, I'm on the phone at the moment.
<apuimedo> gnuoy: ping me at your earliest convenience ;-)
<gnuoy> will do
<apuimedo> thanks
<gnuoy> np
<gnuoy> apuimedo, hi there, back now
<apuimedo> gnuoy: hi!
<gnuoy> o/
<apuimedo> did you manage to look at the email I sent to the juju list?
<apuimedo> about neutron-api integration
<gnuoy> apuimedo, no, I missed that
<gnuoy> I'll take a look
<apuimedo> thanks
<gnuoy> apuimedo, I'm confused about what consumes the midonet.ini  ? Where does it need to go ? On the compute hosts, neutron gateway or api server (or all of them) ?
<apuimedo> api server
<apuimedo> see: https://code.launchpad.net/~celebdor/charm-helpers/midonet
<apuimedo> I already added it for review a couple of days ago ;-)
<gnuoy> apuimedo, yes, I saw your mp, I'm trying to understand the architecture.
<apuimedo> ;-)
<apuimedo> setting 'midonet' as a plugin makes neutron-api point to midonet.ini
<apuimedo> and the relation (or config in the other version) writes it
<apuimedo> so that neutron-server can use the plugin
<gnuoy> apuimedo, I'll give it some thought and get back to you
<apuimedo> thanks. Looking forward to that a lot :-)
<AskUbuntu_> Using juju with maas worked at first but is now giving 503 error gomaasapi | http://askubuntu.com/q/599157
<arosales> apuimedo: Hello
<apuimedo> arosales: hi
<arosales> apuimedo: I missed your ping yesterday, was there something I could assist with?
<apuimedo> well, it was to ask about the email I sent
<apuimedo> to the mailing list
<apuimedo> I pinged gnuoy about it this morning
<arosales> ah so I should look at the list for background info
<apuimedo> arosales: well, in the ml there's the link to the two approaches
<apuimedo> it's basically a patch to neutron-api
<apuimedo> or rather two different alternative patches to achieve an end goal
<apuimedo> config vs relations
<apuimedo> (I favor the latter)
<arosales> gnuoy: would be a good person to give feedback on it
<arosales> apuimedo: and it sounds like you gave him a ping
<arosales> apuimedo: I'll also check the thread and see if we can give some feedback there. Thanks for the reminder ping here though :-)
<apuimedo> I did. He told me he'd think about a bit and get back to me
<apuimedo> thanks ;-)
<gnuoy> apuimedo, In the long term I think a subordinate charm is probably the answer but we need to review the neutron-api  charm and do some refactoring first. So, please go for the simple config option approach (like n1kv) for the moment
<apuimedo> gnuoy: okay. Thanks
<apuimedo> So I guess I can submit it for review already, right?
<gnuoy> apuimedo, have they been tested in oil ?
<apuimedo> oil?
<gnuoy> http://partners.ubuntu.com/partner-programmes/openstack
<apuimedo> gnuoy: first time I see this :P
<hatch> is it possible to execute a hook script in the 'hook environment' manually?
<schkovich> @hatch https://jujucharms.com/docs/1.20/authors-hook-debug
<hatch> schkovich that doesn't allow you to execute them at any time
<apuimedo> gnuoy: https://code.launchpad.net/~celebdor/charms/trusty/neutron-api/midonet_resynced
<hazmat> hatch: yes
<hazmat> hatch: juju-run
<hazmat> hatch: on the unit.. or from a client $ juju run .. its not quite solid for  relation hooks but its usable
<hatch> hazmat: does that execute it in the 'hook environment' though?
<hatch> or just as a normal user
<hazmat> hatch: in a hook env
<hatch> oh cool I didn't know that - thanks
<hazmat> hatch: somebody sent a script to the list (stub?) that uses it to dump all relation / config data for a unit
<hatch> oh interesting
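The `juju run` behaviour hazmat describes can be sketched as follows. This is an illustrative helper, not part of Juju itself: it only builds the CLI argument list for executing a command in a unit's hook environment (so the hook tools like `config-get` are on PATH); actually running it requires a deployed environment.

```python
def juju_run_cmd(unit, command):
    """Return the argv for running `command` in `unit`'s hook environment."""
    return ["juju", "run", "--unit", unit, command]

# Example: dump this unit's config from inside the hook environment.
print(juju_run_cmd("mysql/0", "config-get --format=json"))
```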
#juju 2015-03-21
<swr> any juju devs around?
<swr> hmmm, I'm betting you're all asleep
<swr> Well, just to let you know, in your latest stable package, OpenStack support is completely broken.
<skay> o/
#juju 2016-03-21
<axino> cory_fu: thanks !
<bkerensa> Hello
<bkerensa> I have been getting an error "ERROR cannot assign unit "mysql/0" to machine: cannot assign unit "mysql/0" to new machine or container: cannot assign unit "mysql/0" to new machine: use "juju add-machine ssh:[user@]<host>" to provision machines"
<bkerensa> when trying to deploy mysql on a manual environment running wily
<bradm> bkerensa: did you try doing what it suggested?
<lazyPower> magicaltrout - i replied to the bug. i dont see why open file descriptors would cause a difference one way or another
<lazyPower> i feel like this was an unrelated
<lazyPower> "fix" that just happened to work
<magicaltrout> lazyPower: i replied to you, I think you read it wrong, or I just didn't explain it very well :P
<lazyPower> Ah, that makes sense
<lazyPower> yeah, if we tweak that upstart job it'll do us some justice. We'll need to re-verify once xenial lands and we convert that to a systemd job
<lazyPower> but for a hotfix, i'm +1 to setting that as default so we're g2g on lxd as well as public clouds
<lazyPower> magicaltrout - did you happen to have a patch for that? or was it all manual investigation/fix?
<jamespage> morning all
<lazyPower> o/ jamespage
<lazyPower> jamespage - can i steal your eyeballs for a minute before you get into full swing?
<jamespage> lazyPower, sure
<lazyPower> jamespage - before we begin, this is what we are visualizing - http://i.imgur.com/ABw9G9r.png
<jamespage> lazyPower, okies
<lazyPower> all the funny little subordinate units are Elastic Beats - the replacement for "logstash forwarder"
<jamespage> purple green grey and blue right?
<lazyPower> but now, they are more like FluentD, carbon, et-al - it collects and streams system metrics along with log data.  one beat per focus group - topbeat (distributed htop), filebeat (log files), packetbeat (network protocols), and dockerbeat
<lazyPower> yep
<lazyPower> here's the topbeat dashboard that they have in a demo bundle - http://54.80.82.242/app/kibana#/dashboard/Topbeat-Dashboard
<lazyPower> you'll see you can click into hosts, and drill down
<lazyPower> its *somewhat* interactive
 * jamespage looks
<jamespage> this is neat
<lazyPower> I've got this mostly functional. It's the dashboarding part that's going to wreck my free time
<lazyPower> there's no dash for filebeat, and the packetbeat shipping doesn't appear to be finding anything on the consul http/dns interface :\
<lazyPower> but topbeat looks awesome!
<jamespage> lazyPower, its like ganglia and nagios combined...
<lazyPower> kinda, there's no notion of alerting in here
<lazyPower> i'm pretty sure with some queries, attached to a webservice, that could be changed
<jamespage> lazyPower, okies..
<jamespage> this all looks super useful
<lazyPower> jamespage - i think this has implications with our big data bundles, as we can do this one of two ways - river from ES to HDFS for cold storage, or route through logstash to split messages into dashboard and cold storage.   Think having this in OIL while running the tests. We can reproduce a visualization of the hosts under load during testing.   Long term compute jobs matched with what merlijin and team are doing for "common infrastructure
<lazyPower>  problems", match that to machine metrics as well *shrug*
<lazyPower> im no data scientist, but i think we've stumbled into something useful that applies everywhere for telemetry
<jamespage> lazyPower, hmmm
<jamespage> lazyPower, aggregating general telemetry, log data et al into a single place allows for some interesting analytics certainly
<lazyPower> its that or i'm super late to the party that fluentd/heka/carbon/statsd have been having for a while
<jamespage> lazyPower, tbh this is a bit of a gap with the openstack charm set right now; I'd love to integrate with something like this as well...
<lazyPower> its all juju-info based, and the full config is available for tweaking in the layer
<lazyPower> this is like concept release quality right now
<lazyPower> but it should take little to no time to get running on an openstack deploy
<jamespage> lazyPower, that's fine...
<lazyPower> juju deploy ~containers/development/bundle/beats-core
<jamespage> lazyPower, this is all based around https://www.elastic.co/products/beats ?
<lazyPower> yep
<lazyPower> the dash you're looking at is available as an action on the kibana charm
<lazyPower> juju action do kibana/0 deploy-dashboard dashboard=beats
<lazyPower> all the agents self-register with an index upon relating to elasticsearch, i think that pretty well covers it
<lazyPower> feedback / flames welcome :)
<lazyPower> oh, and all the repos are in github.com/juju-solutions
<jamespage> lazyPower, I'll try find some time to give it a spin...
<lazyPower> completely understand :)
<lazyPower> just wanted to give you an early access peek, see if you're interested in being a stakeholder
<jamespage> lazyPower, deadline at the end of the week so maybe next week - but it does look useful
<jamespage> lazyPower, our monitoring/metric/log solution is a little fragmented right now - so this might be something we target for next cycle - but that sounds about right timing wise to me...
<jamespage> if this is concept right now...
<lazyPower> oh man i cant wait to get this to the list :P
<jamespage> lazyPower, I like to look of this - https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration.html
<jamespage> just suck up everything baby!
<lazyPower> Yeah!
<jamespage> its super simple
<lazyPower> and it does an amazingly good job too
<blahdeblah> jamespage: are you the right person to pester about https://code.launchpad.net/~paulgear/charms/trusty/ntpmaster/sync-charmhelpers/+merge/289605 and https://code.launchpad.net/~paulgear/charms/trusty/ntpmaster/execd-support/+merge/289609 ?  Your name is on it. ;-)
<jamespage> blahdeblah, eek!
<blahdeblah> jamespage: I'll take that as a yes. ;-)
<jamespage> I'll stick it on my list - just working the SRU backlog at the moment
<jamespage> they had already dropped into my inbox tbh...
<blahdeblah> cool - thanks
<blahdeblah> Just wanted to make sure they would get looked at.  At the moment I'm having to work from a fork, which is always sad.
<gnuoy> jamespage, I'm hitting a problem running the xenial mitaka amulet tests for nova-cloud-controller. Am I right in thinking that that should work to the best of your knowledge?
<jamespage> gnuoy, they should do yes
<jamespage> well they did last week at least...
<jamespage> xenial may be foobarred outside of charms...
<jamespage> thats quite possible
<gnuoy> jamespage, no, this is:
<gnuoy> 2016-03-21 09:48:20.269 29291 ERROR nova.api.openstack.extensions CantStartEngineError: No sql_connection parameter is established
<magicaltrout> lazyPower: no patch i'm afraid, just hacked it and passed out
<magicaltrout> onsite at a client, I can dump something in this evening, or you can push the 1 liner
<lazyPower> I'll try to circle back, but if you want to make the PR i'm happy to wait for a link
<lazyPower> would be a big +1 if you made the PR and had associated tests :)
<magicaltrout> okay i'll send something over this evening
<magicaltrout> currently looking at a clients project plan for a tool they've got another consultancy migrating from sql server to hadoop
<magicaltrout> 19 days for Amazon EMR prod env setup
<magicaltrout> should be using juju :P
<lazyPower> +1 to that sentiment
<magicaltrout> or just employ someone more competent ;)
<magicaltrout> Xenial absolutely hoses my laptop battery, I hope that gets better before release ;(
<lazyPower> The only laptop i've ever had good battery life under linux is my XPS13
<lazyPower> which is in a sad state these days :/ I think i tanked the ssd
<magicaltrout> my x1 carbon has a reasonably decent battery but it lasts about 50% as long in Xenial as it did in Trusty
<magicaltrout> but the problem with all these slim laptops is I can't swap it, or add a wedge on
<lazyPower> Yeah
<jamespage> gnuoy, the fix for that should have landed in the master branch already
<jamespage> gnuoy, just testing now
<jamespage> lazyPower, do you ever sleep?
<jamespage> :-)
<gnuoy> The fix for " No sql_connection parameter is established" ?
<gnuoy> jamespage, do you mean the master branch of the charm ?
<lazyPower> jamespage - my sleep schedule is completely borked. i wake up at like 3am and pass out around 8. But the level of work i can get done in this morning wake is off the hook
<jamespage> gnuoy, yes
<lazyPower> i'm seriously considering making this my permanent schedule.
<magicaltrout> I went through years of starting work at 4am
<magicaltrout> get far more done
<magicaltrout> kids stopped that plan, for now at least
<jamespage> gnuoy, I just deployed master branch changes with xenial-mitaka OK
<gnuoy> jamespage, ack, I'll rebase my branch
<lazyPower> my only dependent is furry and doesn't understand the concept of cat vs computer
<magicaltrout> hehe
<lazyPower> on the up side he doesn't disturb me when he's going into spaz mode at 4am on this schedule
<jcastro> lazyPower: so I debated capitalizing MUST and SHOULD
<jcastro> but then the entire thing becomes hard to read
<jcastro> I was thinking maybe instead, under each header,
<jcastro> saying something like "everything below MUST"
<jcastro> and then "everything below SHOULD"
<jcastro> something that makes it obvious without punching your eyeballs in the face with capitalization every line
<lazyPower> Separate it into terms that parse well with proof. Make them errors and warn level events.
<lazyPower> shoulds = warn
<lazyPower> must = error
<jcastro> oh I see
<jcastro> dude, that's brilliant.
<jcastro> I will work on this
<lazyPower> right on :D
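lazyPower's suggestion can be sketched as a tiny severity mapping. This is purely illustrative (the check names and `report` helper are hypothetical, not the real `charm proof` internals): MUST items become fatal errors, SHOULD items become non-fatal warnings.

```python
# Hypothetical sketch: map policy levels to lint severities.
MUST, SHOULD = "error", "warning"

CHECKS = [
    ("metadata.yaml present", MUST),    # a MUST -> fatal error if missing
    ("icon.svg present", SHOULD),       # a SHOULD -> warning only
]

def report(results):
    """results: {check_name: bool}; return (errors, warnings) name lists."""
    errors, warnings = [], []
    for name, level in CHECKS:
        if not results.get(name, False):
            (errors if level == MUST else warnings).append(name)
    return errors, warnings

errs, warns = report({"metadata.yaml present": True})
# icon.svg missing -> a warning, not a fatal error
```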
<marcoceppi> lazyPower: re:amulet
 * lazyPower is all ears
 * marcoceppi looks at code
<lazyPower> for context: i'm puzzled by an amulet nuance, i can get the relationship information from one direction -  but not the same relation in reverse. https://gist.github.com/anonymous/b252865ec752459f01f8#file-10-deploy-with-logstash-L22
<lazyPower> looking at metadata, i have no idea how this actually worked in the first place, the relation name is beat, not filebeat
<marcoceppi> lazyPower: so does that line work?
<lazyPower> its returning the IP address of the unit on the other end of that relation, when i reverse the params -   self.unit.relation('filebeat', 'logstash:beat') (or logstash:filebeat according to this) - it yields that the relationship is not fond
<lazyPower> *found
<marcoceppi> lazyPower: it's the relation as scoped to the unit
<beisner> lazyPower, jcastro +1 and a smile wrt warns being nonfatal, and scooting fatal issues to error.
<lazyPower> so filebeat, filebeat:logstash?
<lazyPower> beisner <3
<marcoceppi> lazyPower: so if you want logstash's side
<marcoceppi> lazyPower: self.d.sentry['logstash'].relation('<LOGSTASH_METADATA_REL_NAME>', 'filebeat:logstash')
<marcoceppi> grr
<marcoceppi> lazyPower: self.d.sentry['logstash'][0].relation('<LOGSTASH_METADATA_REL_NAME>', 'filebeat:logstash')
<marcoceppi> lazyPower: everything is scoped to the unit you're calling relation on
<beisner> on that note - we want to prep *os-charms with the min-juju-version metadata as soon as practical ahead of 16.04.  do we know when charm proof will allow that?
<marcoceppi> lazyPower: http://pythonhosted.org/amulet/amulet.html#amulet.sentry.UnitSentry.relation
<marcoceppi> beisner: as soon as you open an issue on the repo
<lazyPower> ah, duh
<lazyPower> right on, thanks marcoceppi
<marcoceppi> lazyPower: we can probably do a smarter job of this command, but I haven't found a nicer UX for it yet
<lazyPower> well what bit me was i aliased self.unit to that specific sentry
<lazyPower> i didnt even trip that i was calling this on filebeat getting the data from its perspective
<lazyPower> i just knew i had the wrong ip, and was trying to backtrace to where i needed that scope to change. obv the params weren't the issue :P
<lazyPower> thats it, self.d.sentry['logstash'][0].relation('filebeat', 'filebeat:logstash')
<lazyPower> now to figure out where this stale artifact came from that i have in our namespace
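The scoping rule marcoceppi describes can be modeled with a toy stand-in class: relation data is read from the perspective of the unit whose sentry you call `.relation()` on, so the first argument is that unit's own relation name. This `UnitSentry` is illustrative only, not the real amulet API.

```python
class UnitSentry:
    def __init__(self, name, relations):
        # relations maps (local_relation_name, "remote_unit:remote_rel") -> data
        self.name = name
        self._relations = relations

    def relation(self, local_name, remote):
        try:
            return self._relations[(local_name, remote)]
        except KeyError:
            raise ValueError("relationship not found")

logstash = UnitSentry("logstash/0", {
    ("beat", "filebeat:logstash"): {"private-address": "10.0.0.7"},
})

# Scoped to the logstash sentry, logstash's side of the names works...
data = logstash.relation("beat", "filebeat:logstash")
# ...but passing filebeat's side of the names yields "relationship not found".
```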
<tvansteenburgh> what's the best way to determine whether i'm juju1 or juju2 from a running unit?
<beisner> marcoceppi, ack thx, raised :)
<marcoceppi> tvansteenburgh: from a deployed unit? there really is no way
<tvansteenburgh> marcoceppi: i was hoping to make benchmark-gui smart enough to work on either w/o being told
<tvansteenburgh> i guess i could just try both and see which works
<marcoceppi> tvansteenburgh: well, you've got full api access, can't you probe: try login v1, then login v2, and catch the error?
<tvansteenburgh> gmta
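The try-both approach marcoceppi suggests looks roughly like this. The probe callables are hypothetical stand-ins for the real Juju API login calls; the pattern is just "attempt v2 first, fall back to v1, give up if both fail".

```python
def detect_juju_version(probe_v2, probe_v1):
    """Each probe raises on failure; return 2, 1, or None if both fail."""
    for version, probe in ((2, probe_v2), (1, probe_v1)):
        try:
            probe()
            return version
        except Exception:
            continue
    return None

def rejected():
    raise RuntimeError("login rejected")

# A juju1 controller rejects the v2 login, so the v1 fallback wins.
print(detect_juju_version(rejected, lambda: None))  # -> 1
```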
<jamespage> beisner, I'm doing a pull request to add extra-bindings and series anyway
<jamespage> what's the juju-min-version key?
<rick_h_> jamespage: https://docs.google.com/document/d/1ID-r22-UIjl00UY_URXQo_vJNdRPqmSNv7vP8HI_E5U/edit min-juju-version
<rick_h_> with the notes/etc
<jamespage> rick_h_, ta
<jamespage> marcoceppi, raised another two issues for new metadata.yaml top-level entries charm-tools needs to support
<jamespage> I'll swap you for some package reviews/uploads...
<jamespage> :-)
<aisrael> marcoceppi: Have you ever seen this apt error before? http://pastebin.ubuntu.com/15463740/
<aisrael> Has anyone successfully upgraded an environment from juju 2 beta 1 to beta2? `juju upgrade-juju` just reports no upgrades available.
<marcoceppi> aisrael: you need to use a different stream
<marcoceppi> aisrael: that apt error is a known one
<marcoceppi> jamespage: pushing the packages to a ppa without backportpage atm
<marcoceppi> jamespage: what is "extra-bindings" ?
<jamespage> marcoceppi, network space bind points which don't relate to actual relations...
<marcoceppi> jamespage: interesting, okay
<marcoceppi> jamespage: are they just a dict of dicts?
<jamespage> marcoceppi, example - https://github.com/javacruft/charm-neutron-api/commit/665c34bca503edf80c0a5c108b2cf335ec48bcb1
<marcoceppi> jamespage: ack, ta
<marcoceppi> jamespage: we'll have an initial set of packages today, but charm-tools and charm will be ready tomorrow
<jamespage> marcoceppi, ppa location?
<marcoceppi> jamespage: https://launchpad.net/~marcoceppi/+archive/ubuntu/xenial-chopper I just started running dput against that a second ago
<aisrael> marcoceppi: could you point me to any docs on streams in 2.0? My google-fu is failing.
<lazyPower> aisrael https://lists.ubuntu.com/archives/juju/2016-February/006618.html
<lazyPower> this does however make reference to environments.yaml which doesn't exist in 2.0 :\
<aisrael> lazyPower: Yeah, that's where I'm stuck.
<aisrael> `juju get-model-config` doesn't have an agent-stream key
<lazyPower> cherylj ping o/
<aisrael> wait, yes it does
<lazyPower> oh maybe an unping is in order then
<cherylj> hey lazyPower, what up?
<aisrael> So I have agent-stream: devel. I still can't upgrade my environment to beta2, though
<lazyPower> Hey, do you know where we stuff the stream info in juju 2.0 to set streams?
<aisrael> tl;dr;, upgrading from juju 2 beta1 to beta2 isn't working for me
<marcoceppi> aisrael: maybe just destroy environment?
<aisrael> marcoceppi: Probably faster at this point :/
<marcoceppi> aisrael: I don't think we support upgrades from betas
<aisrael> marcoceppi: ahhh. That's good to know.
<lazyPower> unping cheryl, thanks for responding!
<arosales> ~charmers it looks like thedac has gotten all the necessary +1 and then some for his ~charmer application
<arosales> congrats thedac
<thedac> arosales: thanks \o/
<lazyPower> woooooo
<lazyPower> \o/
<lazyPower> congrats thedac
<arosales> any ~charmers care to comment on the thread and him introduced to ~charmer responsibilities ?
<lazyPower> beisner - we still need to onboard you right?
<marcoceppi> aisrael: there are a few that need to be onboarded still
<marcoceppi> arosales: ^
<marcoceppi> we'll do them all at once
<arosales> marcoceppi: sounds good.
<lazyPower> its time to assemble!
<lazyPower> CHARMER TEAM YOOOOOOOOOOO
<arosales> marcoceppi: lazyPower perhaps an official reply on the juju list would be good too
<aisrael> charmer-voltron is coming together nicely
<lazyPower> openstackers: question for you re amulet testing. If i'm onboarding an ISV and say they dont use the openstack-origin config, is the proper way forward here to submit a MP against the charmhelpers.contrib.openstack.amulet.deployments.py class and add themselves? or is this already handled elsewhere?
<beisner> lazyPower, yes plz
<lazyPower> was that re: amulet or re: onboarding?
<lazyPower> :D
<beisner> lazyPower, onboarding.
 * beisner digests the other ?
<beisner> lazyPower, do you have an example/link?
<lazyPower> i have one better, let me forward over the mail
<beisner> cool thanks
<marcoceppi> tvansteenburgh: you got a few mins? Hate to distract but I've got some charm-tools questions
<tvansteenburgh> marcoceppi: sure
<beisner> lazyPower, yes, mp @ c-h for exactly that. it's necessary to exclude them from the automagical flux capacitor charm test configuration foo.
<marcoceppi> tvansteenburgh: I'll see you in eco-wx
<lazyPower> thats what i was thinking, but the confirmation is nice :)
<beisner> if they're using the openstack amulet helper that is
<lazyPower> thanks beisner
<beisner> lazyPower, yw sir
<jamespage> marcoceppi, erm parse == python-parse?
<marcoceppi> jamespage: yes, parse is the source package for python-parse and python3-parse
<jamespage> marcoceppi, problemo - we already have python-parse in distro
 * jamespage looks
<marcoceppi> jamespage: what? where?
<jamespage> marcoceppi, I take it you need the py3 support
<jamespage> ?
<marcoceppi> I freakin searched everywhere for it
<jamespage> marcoceppi, try "rmadison python-parse"
<jamespage> -- Cyril Bouthors <cyril@bouthors.org>  Mon, 11 Nov 2013 15:37:03 +0100
<jamespage> been there a while...
<marcoceppi> bleh!
<marcoceppi> 1.6.3 is so old though
<marcoceppi> 1.6.6 was almost two years ago
<marcoceppi> jamespage: we don't actually need py3, charm-tools is still only py2
<marcoceppi> jamespage: on a fresh xenial machine though, it couldn't find python-parse
<jamespage> marcoceppi, I can rev it - leave that one with me
<marcoceppi> jamespage: thanks, we need python(2)-parse >= 1.6.6
<marcoceppi> jamespage: I'm uploading the next one, there's about 5 that aren't in archive today (except for charm and charm-tools) so 7 total
<marcoceppi> jamespage: I have a few packages with wily as their target, should I just upload those as is to the ppa or bump them to xenial?
<jamespage> marcoceppi, ok python-parse updated in xenial
<marcoceppi> jamespage: \o/ thank you
<jamespage> marcoceppi, bump the target
<marcoceppi> jamespage: just dch -i or is there a better way?
<jamespage> marcoceppi, that's fine for now - i'll tidy as I upload...
<marcoceppi> jamespage: going to pm right quick
<jcastro> kwmonroe: http://2016.texaslinuxfest.org/call-for-papers
<c0s> there seems to be a discrepancy between the command syntax in 2.0 and their description in here. At least
<c0s> juju add-credential
<c0s>   is listed as
<c0s> juju add-credentials <cloud> -f creds.yaml
<c0s> The plural form isn't getting recognized by the software
<kwmonroe> yeah c0s, that's gotta be a typo in the 2.0-beta2 release notes.  probably copy pasta from autoload-credentials (plural) to add-credential (singular).  'add-credential <cloud> -f creds.yaml' is what works for me.
<kwmonroe> jcastro: ack on the TLF cfp
<c0s> yup, it does for sure. The dev's docs you sent me got it right.
<kwmonroe> obey the release notes, devel docs, and #juju (pick 2) ;)
<c0s> indeed
<c0s> hey jamespage.... Long time no see ;)
<c0s> This is Cos from Bigtop
<jamespage> hey c0s - indeed a long time!
<c0s> good to see you around and sticking to the same guns ;)
<jamespage> you bet...
<marcoceppi> rick_h urulama tvansteenburgh when a charm is pushed to the store, will the push command validate series at push time? what happens if a malformed series is added?
<urulama> marcoceppi: as in you push trusty but it's xenial only?
<marcoceppi> urulama: as in the metadata.yaml says "- whatevetr"
<tvansteenburgh> as in you typoed "tursty"
<urulama> it should, yes
<marcoceppi> urulama: but, will it?
<tvansteenburgh> hehe
 * urulama goes and tries before saying yes, just in case
<marcoceppi> tvansteenburgh: I think here we just need to make sure that the series is a valid list, otherwise we'll always be behind with supported series
<tvansteenburgh> marcoceppi: okay
<urulama> marcoceppi: it will fail, yes, with a stupid error though ... i'll add a task for fix
<marcoceppi> urulama: <3 thanks
<marcoceppi> tvansteenburgh: push will gate that, we just need to check formatting
<aisrael> Deploying a local charm in beta2 is hanging - any ideas? http://pastebin.ubuntu.com/15465568/
<tvansteenburgh> marcoceppi: ack, trying to find out if list is required, or if it can be a string instead if you only support one series
<urulama> marcoceppi, tvansteenburgh: verified for both normal and multi series charms ...
<marcoceppi> urulama: <3 thanks!
 * urulama now goes: how could you doubt charm store!!! :)
<aisrael> Looks like the deploy is triggering this: ERROR juju.apiserver charms.go:114 returning error from POST /model/df5bae38-c251-4ee9-8334-ed014ae6fb80/charms?%3Amodeluuid=df5bae38-c251-4ee9-8334-ed014ae6fb80&series=trusty: [{github.com/juju/juju/apiserver/charms.go:68: } {error processing file upload: unexpected EOF}]
 * aisrael goes off to file a bug
<lazyPower> aisrael - was your debug-log spewing errors about hashsum mismatch?
<lazyPower> thats the only bug i've seen thats kept my local charms stuck in pending
<aisrael> lazyPower: Nope. Fat charms that are too fat won't deploy.
<lazyPower> these aren't fat :/
<aisrael> 22M works; 250M doesn't
<lazyPower> its some weird transient thing i only ever seem to hit with local charms
<lazyPower> and usually an upgrade-charm triggers it back into being happy
<aisrael> the only hashsum issues I had was updating packages in the vagrant image
<aisrael> kwmonroe: mbruzek: do either of you know of a big fat charm offhand (one of the ibm ones or big data ones, maybe)?
<lazyPower> there was a really fat one amir and i worked on that was ~ 190 mb fatpacked
 * lazyPower digs
<kwmonroe> aisrael: easiest thing to do is "dd if=/dev/zero of=<charm_dir>/awheck bs=1M count=250" to make a local charm fat.
<aisrael> kwmonroe: Hey, good idea!
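kwmonroe's dd trick can also be done in a few lines of Python: pad a local charm directory with a zero-filled file so a "fat" charm deploy can be exercised. The helper and the `awheck` filename mirror his command; the size is scaled down here, so use `mb=250` to reproduce the size aisrael saw failing.

```python
import os

def pad_charm(charm_dir, mb=1, name="awheck"):
    """Write an mb-megabyte file of zeros into charm_dir; return its size."""
    path = os.path.join(charm_dir, name)
    with open(path, "wb") as f:
        f.write(b"\0" * (mb * 1024 * 1024))
    return os.path.getsize(path)
```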
<lazyPower> even better
<lazyPower> kwmonroe - did you try to replace java with 250mb of /dev/zero data?
<kwmonroe> sure did lazyPower, and no-one cared...
<kwmonroe> ZING!
<c0s> LazyPower, I am sure you won't notice a thing after that - will work just the same ;)
<mbruzek> aisrael: https://jujucharms.com/websphere-liberty/trusty/
<aisrael> mbruzek: Excellent, sir!
<mbruzek> aisrael: That one is not too bad, but the binary for the sdk and liberty is in there.
<mbruzek> So not too fat, but the binaries are in the charm.
<magicaltrout> lazyPower: I'm fixing this kibana init script but I don't know what I can do test-wise, I can't define a test that only runs on LXD, can I?
<lazyPower> hmm, good point
<lazyPower> you really cant as the substrate is defined by the runner agent, either CI, or myself as the driver.
<lazyPower> and i can see someone clicking "deploy on AWS" and that test becomes noisy then
<lazyPower> magicaltrout - point taken, one liner update it is!
<magicaltrout> alrighty, I'll just check it fixes it locally then shunt it up
<arosales> I was working on bootsraping AWS with beta2 and I got
<arosales> http://paste.ubuntu.com/15466576/
<arosales> so I had to create a cred.yaml file and do
<arosales>  juju add-credential aws -f cred.ayml
<arosales>  juju add-credential aws -f cred.yaml
<arosales> is this a known issue that the interactive bits aren't there yet
<marcoceppi> jcastro: fiche is updated, but it's old school ingestion, may take a hot min to show up
<rick_h_> arosales: yes, the team is still working on it
<rick_h_> arosales: some branches went by last week
<arosales> rick_h_: thanks
<arosales> c0s: ^ fyi  -- using a cred yaml file for me I was able to deploy the realtime analytics bundle
<c0s> well, looks like something is different about the accounts.
<c0s> I have setup awscli and about to try to start a micro instance to see what happens
<arosales> c0s: http://paste.ubuntu.com/15466644/ is what worked for me
<c0s> ah, looks like it is past the error message. Perhaps Marco did something to my account ;)
<c0s> it is spinning up an instance now
<arosales> c0s: great
<c0s> yup, doing the apt-get update and all shenanigans
<c0s> good, all set now I guess
<marcoceppi> c0s: so Amazon says we have up to 100 instances per region, but I suspect they're lying. us-east-1 was at 20, it's down to 10 now. I think trying us-west-1 instead might work
<c0s> yeah, could be the case too.... I will switch to the local one: looks like I am the only one here anyway ;)
<c0s> pardon my ignorance, when I have bootstrap'd a controller, I see that juju creates the ssh keys for me but looks like I can not ssh into the bootstrap instance using those.
<c0s> perhaps should use different user name not the ec2-user@ as usual?
<c0s> could not find any documentation on that....
<rick_h_> c0s: as the ubuntu user?
<rick_h_> c0s: ubuntu@hostname
<rick_h_> c0s: and theres a
<c0s> that's better! :)
<c0s> thanks
<rick_h_> `juju ssh` command as well
<rick_h_> c0s: ubuntu@ is the ubuntu cloud image standard account across clouds
<marcoceppi> tvansteenburgh: as an fyi: https://github.com/juju/charm-tools/issues/141#issuecomment-199437756
<c0s> make sense, thank you!
<c0s> rick_h: one more question I guess (in the next 30 minutes at least): does the bootstrapping process bring up an interface similar to that of demo.jujucharms.com ?
<c0s> or it is all CLI at this point?
<rick_h_> c0s: it will but for now you have to juju deploy juju-gui
<c0s> ah, self-contained... I like it. Thank you!
<rick_h_> c0s: it's a new feature landing in 2.0 but not there yet
<rick_h_> c0s: make sure you juju expose
<rick_h_> after deploy
<c0s> ok, doing this right now
<marcoceppi> lazyPower: hey, everything LGTM on the kubernetes PR
<marcoceppi> had a question about etcd, but it's not major
<lazyPower> sure, whats up?
<marcoceppi> lazyPower: it's on the pr
<marcoceppi> also, is there any documentation about charm build or installing charm-tools ?
<lazyPower> ah, no, that was the other card that i haven't done yet
<marcoceppi> lazyPower: ack, then this LGTM pending your throughts about ~containers
<magicaltrout> if the lxd bootstrap node goes missing you can't kill the environment \o/
<jcastro> define "goes missing"
<jcastro> like, destroying it out from under juju?
<magicaltrout> well I don't know whats actually wrong with my server, probably some borked routing but
<magicaltrout> https://gist.github.com/buggtb/3ebe3e02bc7b2d3479a9
<magicaltrout> surely kill should flatten it regardless
<magicaltrout> that said, maybe my build is just out of date, haven't pulled a new one in a week
<lazyPower> magicaltrout - i've had that happen
<lazyPower> the only resolution was to clean up the cache.yaml in your $JUJU_HOME
<magicaltrout> banging
<magicaltrout> kill should kill it regardless of connectivity id have thought?
<lazyPower> i feel like kill-controller should do that for me when i'm banging a --force flag on kill-controller, which is already a RBFH
<lazyPower> yep, my sentiments exactly
<rick_h_> lazyPower: magicaltrout file a bug?
<lazyPower> already on it
<rick_h_> especially if you can reproduce it
<rick_h_> kill-controller does try to play nice, but should bypass the api server if it's not there/times out
<magicaltrout> i'm eating steak, I have priorities! :P
<rick_h_> magicaltrout: fair enough :)
<lazyPower> https://bugs.launchpad.net/juju-core/+bug/1560191
<mup> Bug #1560191: kill-controller is hinky without a model-controller behind it <juju-core:New> <https://launchpad.net/bugs/1560191>
<lazyPower> no taxation of the feels without representation of the bugs rick_h_  :)
<rick_h_> lazyPower: lol, "hinky" ?
<aisrael> <3 hinky
<lazyPower> yeah, i didnt know what else to call it, as its not exactly expected, but it works most of the time
<magicaltrout> lazyPower: launchpad confuses the life out of me
<magicaltrout> but the activity log says i've linked my branch
<magicaltrout> with a 1 line patch
<lazyPower> link me sir
<magicaltrout> https://bugs.launchpad.net/charms/+source/kibana/+bug/1539806
<mup> Bug #1539806: [ARM64][LXD Provider][ 2.0-alpha1-0ubuntu1~16.04.1~juju1] kibana 'hook failed: "install"' <juju-core:Invalid> <kibana (Juju Charms Collection):New> <https://launchpad.net/bugs/1539806>
<magicaltrout> http://bazaar.launchpad.net/~spicule/charms/trusty/kibana/trunk/files
<lazyPower>  bug *and* a fix?
<lazyPower> man, thats awesome!
<magicaltrout> well i can't yet fix go stuff, but i can hack around upstart scripts until they work ;0
<lazyPower> :D
<c0s> do I understand correctly, that Juju doesn't support in-flight changes of the relations?
<c0s> Say, if I want to move a service from its own dedicated node to colocate with another service. It is somewhat equivalent to scaling a cluster up and down, but not exactly, per se
<magicaltrout> f
<magicaltrout> fail
<magicaltrout> you do understand correctly c0s, AFAIK you need to shut it down and redeploy if you want to move your unit from one machine or location to another
<c0s> I see. But I should be able to add/remove slaves as need, right?
<c0s> (presumably, it will be fun on a busy HDFS cluster though)
<magicaltrout> yeah, juju add-unit will add you new slaves
<c0s> cool, that's what I thought. And I guess at this point I can co-locate a new unit with an existing one
<c0s> thanks for the clarification magicaltrout!
<magicaltrout> no worries c0s
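The scale-out commands discussed above look roughly like this. `juju add-unit` and its `--to` placement flag (which colocates the new unit onto an existing machine) are real; the helper only builds the argv, and the service/machine names are illustrative.

```python
def add_unit_cmd(service, to_machine=None):
    """Return the argv for scaling `service`, optionally colocated."""
    cmd = ["juju", "add-unit", service]
    if to_machine is not None:
        cmd += ["--to", str(to_machine)]
    return cmd

print(add_unit_cmd("hdfs-slave"))                # new unit on a fresh machine
print(add_unit_cmd("hdfs-slave", to_machine=3))  # colocated on machine 3
```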
<magicaltrout> you actually charming bigtop then c0s or just playing?
<c0s> at this point I am just playing but who knows ... ;)
<magicaltrout> cool
<c0s> One thing I don't get completely, is the fact that Juju has to build/publish its own set of the binary tarballs, then add all this metadata on top instead of resorting to existing Apache-proper data stack with its native packaging.
<c0s> and of course I refer here to Bigtop ;)
<magicaltrout> yeah but this is canonical you're talking about, they just reinvent stuff because they can ;)
<magicaltrout> Take Wayland/Mir ;)
<c0s> lemme check on that ;)
<c0s> ah, that... yeah
<c0s> Although I like Unity, really
<magicaltrout> regardless of distro the first thing I generally do is remove the default WM ;)
<rick_h_> c0s: happy to talk decisions sometime if you're interested
<rick_h_> bah sorry phone typing
<c0s> yes, very
<c0s> wrapping up for today - made good progress and learnt a lot of new things ;)
<rick_h_> c0s: arosales worth setting up a call there?^
<c0s> or if you feel like doing it today - I would be ready in 15: need to walk the dog
<rick_h_> arosales: to help bootstrap c0s on why things work the way they do?
<rick_h_> c0s: dinner time here, but think arosales can set up something this week
<magicaltrout> c0s: you doing apachecon? i'm trying to round up a loose band of Apache folk interested in juju to sit down and have a beer/chat at some point
<c0s> I don't know yet... planning on it, but it will depend on how business is going for me ;)
<magicaltrout> fair enough!
<c0s> I am not a part of any big corp now, so paying for all the events myself
<magicaltrout> yeah the first apachecon i went to i self funded, then last year I was supposed to apply to TAC and accidentally signed up to LF travel sponsorship
<c0s> would be great to have a chat like that of course. I had some face2face time with some of the Juju folks during Scale14x down in LA, and it was good
<magicaltrout> i was lucky i still got accepted ;)
<c0s> well, for the last three years I was a VP of open source development for a company. With my own budget and all ;)
<c0s> But I think I got tired of corp-stuff, really
<magicaltrout> I assumed you were Roman's actual right-hand man
<magicaltrout> not just a twitter right-hand man ;)
<c0s> I've never been to Pivotal, man ;)
<c0s> Roman is their Director of OS; I was at a completely different company
<magicaltrout> yeah, he has his fingers in many open source pies
<c0s> we never lent a hand to each other ;)
<c0s> rick_h_ understood. Let's do a call when something works for all timewise
#juju 2016-03-22
 * arosales reads back scroll
<magicaltrout> its not very complex
<magicaltrout> arrange meeting with c0s
<arosales> c0s we can install the payload from Bigtop, and that's something we should think about for POWER8
<arosales> c0s the mechanism to easily change the payload is resources
<arosales> The main thing is to ensure the target install source is always available, or the user gets an install hook error and thinks it was juju and not the target install host
<arosales> c0s we can also scale services with add-unit and remove-unit, we just need to model that correctly in the charm
<arosales> But good questions
<magicaltrout> he's also very much left :P
<arosales> I would be interested in knowing more about your use cases
<arosales> magicaltrout: no wonder auto complete wasn't finding him
<arosales> :-)
<magicaltrout> hehe
<arosales> rick_h_: I'll catch up tomorrow when cos is back in channel
<rick_h_> arosales: rgr
<rick_h_> arosales: ty
<magicaltrout> you should leverage his friendship with Roman to get pivotal onboard :)
<magicaltrout> mostly because it would get me out of writing a fat bunch of charms that I don't have time to write but really want :)
<axino> cory_fu: https://github.com/juju-solutions/layer-basic/pull/49 is broken I think, the apt_install doesn't silently fail on trusty
<magicaltrout> lazyPower: let me know when the kibana stuff gets merged so I can test beats again! :)
<jamespage> gnuoy, quite a big one - https://review.openstack.org/#/c/295714/
<gnuoy> glad to see it though
<dimitern> jamespage, hey when you can I'd like to investigate the issue from yesterday with the maas multi-nic containers
<jamespage> gnuoy, and its buddy - https://review.openstack.org/#/c/295745
<stub> I've got a production environment running 1.25.4, proposed stream. Future upgrades should hopefully only be to official releases. Can I set the agent-stream back now, or do I need to wait until I upgrade?
<skay> I have an environment where a charm radically changed and when we ran upgrade-charm I don't think it cleaned up all the old state from the previous charm
<skay> what's the best way to get rid of that service in order to install it from scratch?
<skay> which destroy command should I call? do I need to call multiple ones or will one of them cascade down them for me?
<skay> it's gunicorn and is a subordinate
<skay> destroy-service seems intuitive, but I'd like a sanity check
<tvansteenburgh> skay: yes, that's it
<skay> tvansteenburgh: thanks :thumbs:
<skay> (:thumbs: is actually bd but people mistake it for batman's mask)
<lazyPower> magicaltrout: ack, lemme give it a little longer for manjo
<lazyPower> aisrael - the guy we met from treasure data was Eduardo right?
<aisrael> lazyPower: Yep
 * lazyPower grins
<lazyPower> sam just put me back in touch with him today
<aisrael> Excellent!
<magicaltrout> aye
<lazyPower> mbruzek - if you have a minute, i just sent you a doc with our status update
<lazyPower> can you proof that before i ping the list with it?
<mbruzek> lazyPower: sure
<lazyPower> mbruzek - my mail got rejected due to the attachments, so i'm converting this into a blog post
<mbruzek> lazyPower: I removed some contractions and added a few comments.
<mbruzek> Was my review too late?
<lazyPower> ack
<lazyPower> not at all
<lazyPower> i'll get them folded in, i gotta head to a meeting with weave, but i'll incorp those before publishing
<lazyPower> thanks for taking a look!
<mbruzek> yar
<jcastro> bdx: heya, mind PR'ing your slides or a link to this: https://github.com/juju/presentations
<jcastro> I'm going to start pushing more presentations/talks into that repo
<lazyPower> jcastro - are you exporting our docs slides to like ODP and uploading as well or?
<jcastro> lazyPower: so I have a place to put html slides, and a place for PDFs and other outputs
<jcastro> I figure we can use the readme for links to slides/vids, etc.
<lazyPower> ah that or we can pdf upload
<jcastro> and I am about to make a folder for things like "talk titles and submissions"
<lazyPower> but i'm not really a fan of keeping pdf's in git as they are blobs and just bloat the repo
<lazyPower> so nvm carry on sir
<jcastro> instead of "hey chuck mail me the last 10 submissions for devops days you sent in"
<lazyPower> omg i love this idea
<lazyPower> who's the genius i need to hug?
<jcastro> I will put tips and tricks there too
<jcastro> like terms to use, terms to avoid, etc.
<narindergupta> jamespage: sometime back you were working on the ovs charm with DPDK? will you please give me some pointers so that i can give it a try with JOID?
<jamespage> narindergupta, I've not started on that yet
<narindergupta> jamespage: oh ok
<narindergupta> jamespage: do you know anyone else did?
<jamespage> no one else has done that yet...
<jamespage> narindergupta, it's on the list of things to do still, just not got to it yet...
<narindergupta> jamespage: ok thanks for information
<arosales> aside from 'kill-controller' do any folks have any hints on how to reclaim a juju 2.0 environment?
 * arosales stuck in this loop http://paste.ubuntu.com/15473933/
 * arosales may need to check in #juju-dev
<arosales> fyi fix was to rm ~/.local/share/juju/models/cache.yaml   _if_ you only have the one controller you care about. If you have other controllers you care about then you need to remove the offending lines
<arosales> may need to clean up ~/.local/share/juju/controllers.yaml as well
<arosales> but now I am able to bootstrap again
<arosales> thanks to cherylj
<magicaltrout> thats lazyPower's favourite fix
<arosales> lazyPower: has a pretty good toolbox
<lazyPower> hi
<lazyPower> what did i do?
<lazyPower> i'm actually working on a python script to nuke a leftover controller, but i dont want to publish it
<arosales> all the things
<lazyPower> because hand editing cache.yaml is frightening
<lazyPower> and i dont want to advocate anyone do this
<arosales> lazyPower: I think it is a bug they should address, if not we'll have the juju clean up script again
<lazyPower> bugs open
<lazyPower> 1 sec let me find the link
<arosales> cherylj: ping me if you need me to open a bug on it.
<arosales> lazyPower: oh is a bug open already on it?
<arosales> lazyPower: I basically hit http://paste.ubuntu.com/15473933/
<lazyPower> https://bugs.launchpad.net/juju-core/+bug/1560191
<mup> Bug #1560191: kill-controller is hinky without a model-controller behind it <juju-core:New> <https://launchpad.net/bugs/1560191>
<cherylj> several, I'm sure
<cherylj> that's technically a different issue
<cherylj> this is the source of arosales' problem:  https://bugs.launchpad.net/juju-core/+bug/1543223
<mup> Bug #1543223: kill-controller fails on missing volume <ci> <juju-release-support> <kill-controller> <juju-core:Triaged> <https://launchpad.net/bugs/1543223>
<cherylj> see also, bug #1555744
<mup> Bug #1555744: kill-controller / destroy-controller prevents reuse of controller name <docteam> <juju-release-support> <juju-core:Invalid by wallyworld> <https://launchpad.net/bugs/1555744>
<arosales> lazyPower: thanks for filing the bug, I added a comment
<lazyPower> cherylj - nice, thats the other side of it that i've hit
<lazyPower> thanks :D
 * cherylj cries seeing arosales' suggestion to rm cache.yaml in a bug
<cherylj> heh
<arosales> ok I posted in both bugs
<arosales> but not 1555744 cause I didn't want to spam all the bug reports :-)
<arosales> cherylj: hey it worked
<arosales> lol :-)
<magicaltrout> you may cry
<arosales> cherylj: I did state "given I only had 1 controller I cared about"
<magicaltrout> i usually remove the whole .juju folder :P
<arosales> but I guess I should have noted that was the nuclear option
 * cherylj weeps uncontrollably 
<cherylj> ;)
<arosales> ah well I don't feel as bad now magicaltrout :-)
<magicaltrout> it happens often enough
<magicaltrout> normally user error :P
<arosales> it's commonly pilot error for me and not a bug
<magicaltrout> although i have one production system stuck on 2.0alpha1
<magicaltrout> which i'm not allowed to break
<magicaltrout> because I have no way of upgrading it \o/
<lazyPower> those are always fun, snowflakes that we create in alpha/beta land
<lazyPower> i've been fighting with myself to not setup any 2.0 beta controllers for running systems due to that very reason
<magicaltrout> yeah its like the bastard child, its 2.0 with 1.2 configuration setup
<lazyPower> but with all the goodness thats in here, its really hard to do it
<lazyPower> *to not
<magicaltrout> you  know when its beer o'clock and you can't drink cause you have a conference call in 20 minutes..........
<magicaltrout> plus you need to write a proposal to build a SQL over JSON interface
<magicaltrout> grr
<marcoceppi> magicaltrout: that's the best time to have a beer
<magicaltrout> hehe
<magicaltrout> it wouldn't be the first time
<marcoceppi> cory_fu: which is better?
<marcoceppi> @when('nginx.available', 'charm-svg.running')
<marcoceppi> @when('nginx.available')
<marcoceppi> @when('charm-svg.running')
<marcoceppi> separation or as *args ?
<cory_fu> They are equivalent and it's up to your personal aesthetic.
<marcoceppi> cool beans, thanks
<marcoceppi> cory_fu: same for @when_not?
<cory_fu> marcoceppi: The main difference is that the order of args from @when decorators is a little confusing when split.  It basically goes bottom to top, left to right
<marcoceppi> cory_fu: yeah, I've encountered that from patch library
<cory_fu> marcoceppi: All decorators are ANDed together.  The only decorators that do not also AND their args are @when_any and @when_not_all
<marcoceppi> cool
<marcoceppi> I think I'll leave them split for now, it's easier to talk to
<marcoceppi> cory_fu: I'
<cory_fu> I tend to group the ones that are either related or the same for two similar blocks, and split ones that vary between two blocks.  So, if two blocks have shared preconditions, but one is @when('foo') and the other is @when_not('foo'), I will group the shared preconditions @when('bar', 'qux') but split the changing state @when('foo')
<cory_fu> If that makes any sense
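The equivalence being discussed can be sketched in plain Python (a toy dispatcher to show the semantics, not the real charms.reactive internals): stacked @when decorators accumulate groups of preconditions, and since all decorators are ANDed together, a handler fires only when every named state is active.

```python
# Toy model of @when: each decorator records its state group on the
# handler; dispatch requires ALL states across ALL groups to be active.
active_states = set()

def when(*states):
    def wrap(fn):
        # Decorators apply bottom-up, so groups accumulate in that order.
        fn._preconditions = getattr(fn, '_preconditions', []) + [states]
        return fn
    return wrap

def should_run(fn):
    return all(s in active_states
               for group in fn._preconditions
               for s in group)

@when('nginx.available', 'charm-svg.running')
def combined():
    pass

@when('nginx.available')
@when('charm-svg.running')
def split():
    pass

active_states.update({'nginx.available', 'charm-svg.running'})
assert should_run(combined) and should_run(split)   # equivalent forms

active_states.discard('charm-svg.running')
assert not should_run(combined) and not should_run(split)
```

Both spellings gate on the same set of states; only readability (and the arg ordering cory_fu mentions) differs.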
<marcoceppi> cory_fu: Ive been thinking about the config stuff, how they're basically using states for events
<marcoceppi> cory_fu: would it be better to standardize that as charms.reactive.emit ? so you could emit('state') instead of set_state and removing it?
<marcoceppi> (where emit would basically say, at end of state execution, remove this state)
<cory_fu> There was a discussion about that on one of the issues or PRs
<cory_fu> So, emit would be shorthand for "set_state(event); hookenv.atexit(remove_state, event)"
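That shorthand could look something like the following minimal sketch. Here set_state, remove_state, and the atexit queue are local stand-ins for illustration, not the real charmhelpers/charms.reactive functions.

```python
# Hypothetical emit(): set a state now, and queue its removal for the
# end of a successful hook run.
states = set()
_atexit_callbacks = []

def set_state(state):
    states.add(state)

def remove_state(state):
    states.discard(state)

def atexit(fn, *args):
    _atexit_callbacks.append((fn, args))

def emit(event):
    # Shorthand for: set_state(event); hookenv.atexit(remove_state, event)
    set_state(event)
    atexit(remove_state, event)

def flush_atexit():
    # Only called when the hook exits cleanly (exit status 0).
    for fn, args in _atexit_callbacks:
        fn(*args)

emit('config.changed')
assert 'config.changed' in states       # visible to handlers this hook
flush_atexit()
assert 'config.changed' not in states   # gone after a clean exit
```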
<marcoceppi> cory_fu: right now I'm wrestling with update-status hook. I'd like to just make a @when('update-status') or something similar which would be triggered during the update-status hook, but also something I could poke at from methods by just emitting that state
<marcoceppi> cory_fu: basically
<cory_fu> marcoceppi: I tend not to collect status reporting into a single handler, though sometimes it is more useful to do so
<cory_fu> bcsaller: Thoughts on ^
<cory_fu> ?
<cory_fu> marcoceppi: Here was the other comment touching on events vs states: https://github.com/juju-solutions/charms.reactive/issues/44#issuecomment-176278218
<bcsaller> remove at end of hook is only one possible semantic, sometimes that same spelling might be intended to mean "remove once processed". If the hook fails, either the trigger condition needs to regenerate or the cleanup event was wrong. IMO it's better to detect state (de)activation and decouple it from hook context
<cory_fu> bcsaller: If the hook fails, it is likely that the states will not be flushed and things will re-run from the initial state on hook retry, though that's not guaranteed
<bcsaller> atexit can make that tricky
<cory_fu> How so?
<cory_fu> atexit isn't run on error
<bcsaller> cory_fu: not on any sys.exit?
 * TheMue listens trying to learn a bit
<TheMue> hi Cory, hi Benjamin
<bcsaller> hi
<TheMue> bcsaller: even after leaving Canonical Juju fascinates me and I advertise it *bg*
<bcsaller> excellent
<marcoceppi> o/ TheMue
<cory_fu> bcsaller: https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/__init__.py#L74
<cory_fu> Only if there are no errors and it exits 0
<bcsaller> cory_fu: ahh, thanks, didn't recall we were using our own impl of that
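The failure-path behaviour under discussion can be illustrated with a tiny self-contained dispatcher: cleanup callbacks registered during a hook are flushed only if the hook body completes without raising, so a failed hook leaves its states intact for the retry. This is an illustration of the semantics only, not the actual charms.reactive implementation linked above.

```python
# run_hook flushes registered cleanups only on success, mirroring an
# atexit that skips callbacks when the hook errors out.
def run_hook(hook_body):
    callbacks = []
    try:
        hook_body(callbacks.append)   # body registers cleanups via reg()
    except Exception:
        return 'failed'               # cleanups deliberately NOT flushed
    for cb in callbacks:
        cb()
    return 'ok'

log = []
assert run_hook(lambda reg: reg(lambda: log.append('cleanup'))) == 'ok'
assert log == ['cleanup']

def failing(reg):
    reg(lambda: log.append('never'))
    raise RuntimeError('hook error')

assert run_hook(failing) == 'failed'
assert log == ['cleanup']   # the failed hook's cleanup never ran
```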
<TheMue> marcoceppi: people around me are always fascinated about its possibilities. my talks in the past have been more about the technology (just core *g*), but now maybe I should place talks about charms
<bcsaller> cory_fu: I still tend to think that trying to couple states to hook invocations isn't needed
<cory_fu> bcsaller: In general, I agree and would go even further to say that it's a bad idea.  But there are also cases where we want to ensure that a given state can be handled by every handler that wants to and is cleaned up afterward.
<cory_fu> That can be tricky to do, because no single handler should remove the state in that case since it might block other handlers from responding.
<cory_fu> The end of the hook context becomes a work-around in that case.  Really what it means is that "all handlers that are going to trigger given the current overall state have done so"
<bcsaller> cory_fu: which should be something we can detect, record and clear automatically
<cory_fu> The only way we can detect it automatically is when the dispatch loop terminates, and the very next thing that is run then is atexit
<cory_fu> bcsaller: Merlijn had another interesting take on it with the idea of having an "event" that handlers only got one "bite at the apple", as it were.  So if their other conditions didn't match when it was emitted, they didn't see it later if their preconditions did change.  That's something we can't really model now at all
<bcsaller> cory_fu: do you have time for a hangout at some point?
<cory_fu> Sure
<falanx> hi, how can we specify juju to deploy 16.04 on MAAS instead of the default 14.04?
<marcoceppi> falanx: that depends on the charm, most of the charms are targeted at 14.04 (trusty) - what are you trying to deploy?
<marcoceppi> stokachu: got a min for a quick review? https://github.com/battlemidget/juju-layer-nginx/pull/4
<falanx> marcoceppi: we are trying to deploy the controller
<marcoceppi> falanx: are you using 1.25 or 2.0?
<falanx> 2.0
<stokachu> marcoceppi: merged, thanks!
<marcoceppi> falanx: you should just be able to do `juju bootstrap --bootstrap-series=xenial`
<stokachu> marcoceppi: what do you think about me moving those layers under the juju-solutions org?
<marcoceppi> stokachu: the nginx one?
<stokachu> and nodejs
<stokachu> to start with
<marcoceppi> I don't have a problem, but they're fine in your namespace
<stokachu> ok thats cool
<marcoceppi> stokachu: I do have some plans to spruce up the nginx layer, but it breaks compat so I'm not sure how to handle this gracefully
<marcoceppi> cory_fu: ^
<stokachu> i thought there was some talk of versioned layers?
<LiftedKilt> falanx, marcoceppi: adding bootstrap-series results in "ERROR cmd supercommand.go:448 failed to bootstrap model: no matching tools available"
<stokachu> LiftedKilt: --upload-tools
<marcoceppi> ninja'd
<LiftedKilt> stokachu: results in same error
<marcoceppi> LiftedKilt: weird.
<LiftedKilt> I'm running: juju bootstrap juju2 dr --upload-tools --bootstrap-series=xenial --debug
<marcoceppi> LiftedKilt: what does `juju version` say?
<LiftedKilt> marcoceppi: 2.0-beta2-wily-amd64
<marcoceppi> LiftedKilt: can you pastebin `juju list-clouds` ?
<stokachu> ugh i can reproduce
<marcoceppi> huzzah
<marcoceppi> cherylj: is it possible to bootstrap a xenial controller?
<cherylj> marcoceppi: sure is.  What cloud?
<LiftedKilt> cherylj: MAAS
<falanx> a private cloud
<LiftedKilt> MAAS version 1.9.1+bzr4543-0ubuntu1 (wily1), for what it's worth
<cherylj> ah, if you can't get to the streams for tools, you'll need to --upload-tools
<marcoceppi> cherylj: so, even with upload-tools the bootstrap fails
<cherylj> LiftedKilt: are you bootstrapping from an ubuntu machine?
<cherylj> you may not have an up-to-date distro info
<LiftedKilt> cherylj: from a 15.10 machine, yes
<cherylj> which would cause that error
<cherylj> LiftedKilt: do you have Xenial in /usr/share/distro-info/ubuntu.csv?
<LiftedKilt> 16.04 LTS,Xenial Xerus,xenial,2015-10-22,2016-04-21,2021-04-21
<cherylj> LiftedKilt: can you send me a paste of the bootstrap --upload-tools --debug?
<LiftedKilt> marcoceppi, cherylj: actually it fails with the same error for bootstrap-series=wily as well
<LiftedKilt> cherylj: sure
<cherylj> ah, there's some weird behavior with bootstrap-series
<cherylj> LiftedKilt: try default-series=xenial
<LiftedKilt> http://pastebin.com/L7ziCLJx
<LiftedKilt> cherylj: I thought default series was for charms?
<cherylj> --config default-series=xenial
<cherylj> LiftedKilt: it will also be used when adding machines
<LiftedKilt> cherylj: perfect - it's bootstrapping now
<cherylj> yay!
<cherylj> LiftedKilt: there are already bugs open about the bootstrap-series not working.  I can find them if you'd like
<LiftedKilt> cherylj: no that's fine - as long as I can get around it I'm happy
<LiftedKilt> cherylj, marcoceppi thanks for the assistance!
<marcoceppi> thanks cherylj
<cherylj> anytime!
<cory_fu> marcoceppi: Sorry, was on a call.  Were you asking me about how we deal with breaking changes in layers?  You were one of the opponents to versioned layers, so you tell me.  ;)
<marcoceppi> cory_fu: I was just supporting your redirect ;)
<cory_fu> TBH, I'm still on the fence wrt versioning for base or interface layers.
<marcoceppi> tbh, I think it's up to the charm author to deal with it, but maybe supporting "major" revision, or backward incompat versions, might be a compromise
<marcoceppi> where each version is basically an epoch
<hatch> can I specify the lxc profile to use when using the juju 2 lxd provider?
<cory_fu> marcoceppi: For interface layers, there's also the difficulty that we now have two things that could potentially be versioned: the interface protocol and the interface layer's API.  Now that we have interface layers, it's actually less of a concern for the protocol to change, as long as the layer manages the complexity of maintaining backwards compat. but it's a huge deal if the layer API changes in a breaking way
<marcoceppi> cory_fu: right, given the simplicity of the interface (key val comm) backwards compat isn't nearly as complex as a code change breakage
<cory_fu> If we're talking about doing "epoch" versions, then each epoch version change is something like a fork
<marcoceppi> cory_fu: more or less, yes
<c0s> cory_fu: with all that you just said wrt protocols, interfaces, and versions - how do you envision communicating the underlying components' APIs to the client software?
<c0s> say, if a juju bundle includes component_7 (where 7 is the next revision) then how would I know that my software will work with it?
<c0s> it == the component that component_7 charm represents
<c0s> am I making any sense?
<cory_fu> c0s: So, I'm not sure I understand your question, but in particular, bundles don't contain layers (what I assume you mean by components), they just contain charms.  Charms are built from layers, and some of those layers (interface layers) are responsible for managing the communication protocol of interfaces and providing a defined, documented API to charms.
<cory_fu> c0s: But, also, layers are not combined or updated at deploy time, only at build time (which could be thought of as the "compile" phase for the charm)
<c0s> I guess I am looking a bit deeper into the relation between the layer (sorry for mis-using the terminology) and the actual software the layer is deploying/managing
<cory_fu> So, regardless of whether those layers are versioned or just updated at every build, it's the job of the charm author to verify that the newly built charm works as expected, and is why we want bundles (and sometimes charms) to include tests
<c0s> right, that's how you produce the stack
<c0s> now, when I (as a user) deploy your stack (or bundle in the juju-speak) - how do I know that my hdfs app relying on 2.2.0 API version will work. Does the version of underlying component (software) get exposed somehow?
<cory_fu> c0s: You would know that it works because it should be tested before it is published to the charm store
<c0s> say, if I deploy using packages I can do apt show package-name and get some info
<c0s> wait, this is my client application. You can not possibly claim that you test _all_ applications out there
<c0s> I am not questioning the integrity of the bundle
<magicaltrout> charms that are in the recommended namespace have been tested and validated
<magicaltrout> so if you install a charm not in the recommended namespace YMMV
<urulama|afk> just a note to all people using direct publishing. charm store on production was updated and you'll need new charm command that marcoceppi will make ready soon
<c0s> I think we are still talking past each other. I guess that's my English as a second language
<cory_fu> c0s: I think there's some disconnect on what you and I mean by API
<c0s> yeha
<c0s> yeah, I am sure. By API I mean the interfaces of the software you're deploying with charms. I am not talking about juju interfaces
<c0s> say, HDFS open() API
<cory_fu> There is the API that the layers use to work together (basically, the states that they set using set_state and watch using @when, etc)
<c0s> or better yet - truncate, which exists in some versions of HDFS and not others.
<c0s> and if I write software to work with an HDFS cluster it will fail if I am calling truncate which isn't there
<cory_fu> Ok, so in that case, the HDFS charm would be responsible for making sure it only installs a version of HDFS that it knows how to work with.  And the big data charms report the version of Hadoop they install to ensure they all have the same version
<c0s> just for the sake of argument: I know this call exists in HDFS x.y.z. If I deploy apache-core-batching bundle - is it easy for me to figure out what version of HDFS is coming with it?
<c0s> ok, good - now we are on the same page.
<c0s> So, charms have a way to communicate the versions of underlying software (components). Good
<c0s> let's get back to the original discussion of how charms should be versioned, shall we? ;)
<cory_fu> c0s: Well, they can communicate the version to other charms that connect to them, but it's up to the charm author (well, actually, the interface author) to make that a part of the interface protocol
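The pattern cory_fu describes, where the providing charm publishes its software version over the relation and the consumer gates on it, might look like this toy sketch. It is plain Python with a dict standing in for Juju's relation data bag; the version numbers and minimum are hypothetical, not taken from any real interface layer.

```python
# Stand-in for the key/value data bag Juju shares over a relation.
relation_data = {}

def provider_joined():
    # e.g. an HDFS charm advertising which Hadoop version it installed
    relation_data['version'] = '2.7.1'

def requirer_ready(minimum=(2, 7)):
    # The consuming charm only proceeds if the advertised version is
    # new enough for the API calls it intends to make (e.g. truncate).
    v = relation_data.get('version')
    if v is None:
        return False
    return tuple(int(p) for p in v.split('.')[:2]) >= minimum

provider_joined()
assert requirer_ready()                    # 2.7.1 satisfies >= 2.7
assert not requirer_ready(minimum=(3, 0))  # but not >= 3.0
```

In a real interface layer the provider would set this via relation settings and the requirer would translate it into a reactive state for handlers to @when on.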
<c0s> but what about the user? Will he have any way of finding out the version?
<c0s> That's all I am trying to figure out
<c0s> in other words: is there an analog of apt show packagename ?
<cory_fu> The version might be a config option on the charm, or it might be hard-coded in the charm and documented in the README
<c0s> ok, so I need to look into a particular charm README to find what versions it packs, right?
<cory_fu> Generally, yes.
<c0s> ok, got it.
<c0s> now the last one, I promise - currently, bundles aren't versioned, per se, right?
<c0s> although layers (or charms) are.
<cory_fu> Most charms are hard coded to install a specific version of the software, though many do allow you to specify either a version or a source URL for the software so that you can control the version to some degree
<c0s> ok, got it
<cory_fu> Bundles and charms have revisions, and bundles can specify what revision of each charm they deploy (which is required for recommended bundles)
<c0s> thanks
<c0s> yet again ;) - revisions aren't directly visible to the user, right?
<cory_fu> Bundles can also specify config for charms, so if the software version is configurable in a charm, the bundle can specify that
<c0s> ok, thanks
<cory_fu> Revisions are visible to the user, yes.  https://jujucharms.com/u/bigdata-dev/apache-hadoop-namenode/trusty/4 is revision 4 of that charm
<cory_fu> aka cs:~bigdata-dev/trusty/apache-hadoop-namenode-4
<cory_fu> You can also leave the revision off to get the latest
<c0s> cool, thanks
<c0s> I guess you guys are going to hate me soon
<c0s> ;)
<c0s> In the current design if I want to take advantage of existing 3rd party deployment code (e.g. a bigtop puppet recipe), I would have to wrap it in a Python reactive script, right?
<magicaltrout> c0s: lazypower did a cool talk about juju leveraging ansible
<magicaltrout> which could be repurposed for puppet or chef
<magicaltrout> https://www.youtube.com/watch?v=0eymk93lY8k
<c0s> as in "I can write ansible scripts which Juju be able to reuse for the deployment" or something else?
<magicaltrout> yeah, as in, I have ansible already but want to leverage juju's deployment capabilities so make a few changes and deploy my ansible code via juju
<magicaltrout> or something like that
<c0s> makes sense, thanks for the link
<cory_fu> c0s: You can also do reactive handlers in bash, but you still do end up having to call out to your existing cfg mgmt tool from a handler, yes
<c0s> yup, that's what I thought. Thanks!
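A handler wrapping existing config-management code usually stays thin and just shells out. Here is a hedged sketch of such a call-out to a puppet manifest; the manifest and module paths are hypothetical, though puppet's --detailed-exitcodes behaviour (0 = no changes, 2 = changes applied) is real.

```python
import subprocess

def puppet_cmd(manifest, modulepath='/opt/bigtop/puppet/modules'):
    # Build the `puppet apply` invocation; the paths are illustrative.
    return ['puppet', 'apply', '--detailed-exitcodes',
            '--modulepath', modulepath, manifest]

def apply_manifest(manifest, runner=subprocess.run):
    # With --detailed-exitcodes, puppet exits 0 (no changes) or 2
    # (changes applied) on success; anything else should raise so the
    # hook fails and Juju surfaces the error.
    rc = runner(puppet_cmd(manifest)).returncode
    if rc not in (0, 2):
        raise RuntimeError('puppet apply failed with exit code %d' % rc)
```

A reactive handler would then just call apply_manifest('site.pp') from inside a @when-decorated function, keeping the operational knowledge in the existing puppet code.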
<falanx> Why does the lxd charm only have options for lvm and btrfs?  Wasn't zfs also supported?
<LiftedKilt> jamespage, falanx: What would it take to enable zfs support in the lxd charm for xenial?
<marcoceppi> tvansteenburgh: I've done some long-needed triage in the charm-tools repo, cleaned up the milestones, and made sure things were assigned: https://github.com/juju/charm-tools/milestones - unless you're going to be working any more on these tonight I'm going to cut a 2.0.0 since the new charm command is ready
<lazyPower> marcoceppi - yeah man you filled me up with notices :D
<lazyPower> but look at all that progress!
<blahdeblah> Hi all; any charmers able to look at http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/3277/ and http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/3278/ and confirm that this is just broken test infrastructure?  I can't see anything in the logs which indicates it's a problem with my MP.
<blahdeblah> *MPs
<lazyPower> blahdeblah - yeah looks like the security group cleanup at the testrun start is what caused that
<lazyPower> the output looks fine on 3278 otherwise
<lazyPower> boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
<blahdeblah> lazyPower: thanks - anything I need to do to make sure the MPs don't end up in limbo?
<lazyPower> blahdeblah - i'll poke someone to take a look at this tomorrow and follow up - but it LGTM otherwise, i see your bundle stood up and got 1 test passed.
<lazyPower> is it listed on review.juju.solutions?
<blahdeblah> yep
<lazyPower> i dont see anything from you in there :/
<blahdeblah> 2nd & 3rd from the bottom
<lazyPower> oh probably not sharing irc handle w/ launchpad
<lazyPower> derp
<lazyPower> Yeah, so long as they are in there it'll get reviewed. Typically we kick the ci runs off the morning of our review time to get fresh results
<blahdeblah> cool
<blahdeblah> thanks
<lazyPower> so i wouldn't stress over that initial result :( i can kick it again if you like, maybe its been resolved
<tvansteenburgh> marcoceppi: cool, i won't be getting any more done tonight, release away
<marcoceppi> tvansteenburgh: cool, thanks
<arosales> kwmonroe: cory_fu: I am still not able to deploy realtime-syslog-analytics with juju 2.0 beta2 in aws-east1
 * arosales is going to try a different region
<arosales> http://paste.ubuntu.com/15475911/ is what I see --- stuck in waiting to agent init to finish . . .
<magicaltrout> well arosales if the machine doesn't come up you wont get the agent init to finish
<lazyPower> arosales - have you done this?
<magicaltrout> you sure you're not getting something like the AWS instance upper limit errors?
<lazyPower> arosales 'juju retry-provisioning #'
<lazyPower> default upper limit is 15 iirc, bumpable to 25
<arosales> sorry I was looking at my aws account
<arosales> magicaltrout: ya it feels like an aws limit, but I have bootstrapped this before
<arosales> and my amazon instance doesn't show any limit issues
<arosales> pehraps my sec groups limit is getting close . . .
<arosales> lazyPower: I haven't tried that yet, I was going to see if I had better luck in us-west-2
<arosales> 500 sec group limit, I am at 162 so ok there
<arosales> and 20 instance limit and I have 0 current, so I am ok there too
<magicaltrout> surely juju debug-log gives some clue as to the failure cause?
<arosales> not really
<magicaltrout> excellent! ;)
<lazyPower> arosales i had some issues with local charms earlier this week but it was hard to reliably reproduce
<lazyPower> none of those charms are local right?
<lazyPower> i realize this was at the infra provider level, but, just crossing off the list
<arosales> me still investigating though
 * arosales still investigating though
<arosales> lazyPower: all charm store charms
<lazyPower> mfw using juju 1.25 where juju dhx -s still works
<magicaltrout> semi off topic but there's 2 tools ubuntu is missing by default. mosh for those of us moving around and pastebinit to dump charm logs to pastebin without messing around
 * lazyPower falls in love with dhx all over again, after all these weeks
<arosales> magicaltrout: it would be nice if juju could pass cloud-init a metadata file that includes the tools users like seeing in their environments, like the ones you mention
<magicaltrout> yeah arosales thats a cool idea
#juju 2016-03-23
<magicaltrout> whats the release schedule for push to charm store?
<lazyPower> magicaltrout - i propose an idea
<lazyPower> nvm i just myth-busted myself
<c0s> ah... nvm
<lazyPower> o/ c0s
<c0s> makes me cry in pain
<c0s> fortunately, I am not using it ;)
<lazyPower> i have it on good authority you're looking to do some puppet in charm layers?
<c0s> not exactly
<c0s> was rather wondering if this is possible, as all this operational knowledge has been codified a number of times either in puppet or chef
<c0s> at the risk of sounding like a broken mp3 - Bigtop is the case in point ;)
<magicaltrout> i didn't realise mp3's were written to spinning plastic discs
<c0s> yeah I know - I sound like a corner drug peddler ;)
<magicaltrout> thats more like it!
<c0s> that's a quote from Futurama ;)
<c0s> about the mp3
<c0s> dunno if there are fans of that cartoon here
<magicaltrout> as I complained to other Apache members today.... I have 6 1/2 hours of tutorials to write for ApacheCon..... I dont have time to watch tv (or cartoons ;) )
<magicaltrout> lazyPower is ya man though
<lazyPower> well, I can help
<c0s> fortunately for you, that show ended years ago ;)
<lazyPower> i'm not super familiar with puppet, but there have been some charms submitted in puppet already - although they are an Openstack SDN vendor
<lazyPower> so the bindings in there may be helpful for us to look at, and extract for use in reactive charming
<lazyPower> are your bigtop puppet scripts done in hiera?
<c0s> that'd be great, but first this community needs to make up their mind how they want to cooperate with Bigtop
<c0s> we are using hiera for the configuration, yes
<c0s> but it isn't like "it is done in hiera"
<lazyPower> yeah, i think we can extract some learnings there
<lazyPower> i'll reach out to apuimido and see if he's got 30 minutes or so to riff with me on pain points
<lazyPower> and i'll fire up charms.puppet i guess :)
<c0s> ideally, the knowledge part should stay the same and just be upstreamed
<c0s> that's how OSS works, you know ;)
<magicaltrout> c0s: explain bigtop to me, its like the hadoop reference stack right? so how does that differ to Apache Hadoop or what these charmer guys ship?
<blahdeblah> +1 if you can get something like hiera-eyaml integrated so that we can have a more sensible way of shunting secrets around the place
<c0s> pretty cool. I would be happy to help with that
<c0s> Bigtop is Apache BigData stack
<c0s> with binaries, deployment, integration testing, and the whole 9 yards
<lazyPower> blahdeblah - we need to speak louder about secret management with juju when we're at sprints. it would be nice to even see a password config type so it at the bare minimum renders a password field in the gui
<c0s> + a framework to develop your own stacks
<lazyPower> actually scratch that - we dont need to speak louder, we do need to be consistent with getting a bug filed for it and then beating the drum of that bug#
 * lazyPower goes to find it right now
<c0s> hopefully it helps, magicaltrout
<blahdeblah> lazyPower: I don't know anything about the GUI side; I've never used it.  I just know that we end up having lots of secrets sitting around in places that are not encrypted at rest, and that's not a great look.
<lazyPower> yeah
<lazyPower> i agree
<lazyPower> https://bugs.launchpad.net/juju-core/+bug/1287661
<mup> Bug #1287661: some config options should be considered secret <canonical-is> <config> <juju-core:Triaged> <https://launchpad.net/bugs/1287661>
<magicaltrout> yup thanks c0s. I've swung by bigtop over the years but left none the wiser! clearly i need to do more reading
<c0s> sure, no worries
<c0s> It has developed a lot lately.
<c0s> we have this cool feature where one can deploy a docker-based cluster for development/testing purposes with a single command
<c0s> and so on ;)
 * blahdeblah edits that bug report to reference something other than SSL certs, which aren't actually secrets
<lazyPower> :D
<c0s> and being a reference implementation, it really becomes a focal point where the standard for the Hadoop eco-system is getting created.
<c0s> like that www.odpi.org which I am helping to get off the ground now
<c0s> anyway, switching off - see yall later.
<lazyPower> cheers o/
<lazyPower> thanks for catching up c0s
<magicaltrout> ah yeah i recall that bit. cool.
<magicaltrout> Adios c0s
<c0s> cheers
<blahdeblah> lazyPower: I think there's also the whole issue of how they're stored, not just how they're displayed.  To achieve repeatable deploys, we currently need to keep secrets files hanging around on the deployment server, which is also not desirable.
<lazyPower> blahdeblah - right and projects like vault exist for this reason
<blahdeblah> Haven't played with vault myself, but yeah; we need something like that
 * lazyPower nods
<lazyPower> i'm in the exact same boat
<lazyPower> i keep thinking i'm going to deploy it and poke at it, but haven't found the time yet
<arosales> magicaltrout: re push marcoceppi is working on getting the latest version into Ubuntu 16.04 as we speak. Post that I think he is going to announce to the list how folks can try the beta
<marcoceppi> arosales magicaltrout tldr: tomorrow - https://lists.ubuntu.com/archives/juju/2016-March/006910.html
<arosales> marcoceppi: thanks
<lazyPower> marcoceppi - considering the store breaks from DVCS then - i guess i should round up the scripts and give this one more go eh? https://youtu.be/rz5DVte0SKg
<arosales> marcoceppi: does the charm command allow for setting an upstream url for a bundle?
<marcoceppi> arosales: probably?
<marcoceppi> my guess is yes
<arosales> marcoceppi: I may confirm that with urulama|afk
<arosales> marcoceppi: thanks
<lazyPower> arosales - output of charm set shows it as true for a charm, unclear on bundle
<lazyPower> but i would imagine it has the same meta properties as charms
<lazyPower> marcoceppi - does this mean tomorrow i need to gut the PPA from devel flavor?
<marcoceppi> lazyPower: devel flavor of?
<lazyPower> charmbox
<arosales> ya I was just wondering if someone wanted to fork or contribute to a bundle how would they do so
<marcoceppi> lazyPower: does it have juju/devel in there?
<lazyPower> it does, but it also has ppa:marcoceppi
<marcoceppi> lazyPower: juju/devel will be a higher version than ppa:marcoceppi
<lazyPower> perfect
<lazyPower> no rush then
<marcoceppi> ~beta vs ~rc
<marcoceppi> then when xenial lands I'll backport into ppa:juju/stable
<lazyPower> solid, i just dont want stale cruft hanging around as :devel is going to supplant :latest when we release 2.0
<lazyPower> and :latest is going to move to a 1.25 tag, frozen in time
<marcoceppi> lazyPower: ack
<marcoceppi> lazyPower: going forward, esp for charm-tools, I want to kind of build a release schedule and standardized ppa setup for it
<lazyPower> i'm for this :+1:
<marcoceppi> lazyPower: yeah, we're going to fall in line with the core 6 month cycle
<lazyPower> part of what i'm doing now is trying to bring the docker images into alignment with our other tooling so it's on a stable build cycle too, and i want to get better docs around its uses. its a pretty versatile porky little toolbox. Once we get matts lxd image and our new vagrant scripts in alignment with this effort we'll have a nice pipeline for everything
<lazyPower> all based out of the groundwork we have in the old vagrant image, now in charmbox, moving along to the other projects :)
<marcoceppi> lazyPower: you going to be around for a code review in like 30 mins?
<lazyPower> sure
<lazyPower> i gotta take out the garbage, let me go do that and i'll do CR for ya
<arosales> cory_fu: kwmonroe: for what ever reason realtime-syslog-analtics works in us-west-2
<magicaltrout> magical reasons
 * thumper likes magic
<lazyPower> arosales - just welcomed cloudguru to the honorary containers-contributors team :)
<lazyPower> \o/
<lazyPower> thumper o/
<lazyPower> fancy seeing you here
<thumper> I'm always here
<thumper> just not always talkative
<lazyPower> thats the punchline :D
<mgz> I'd be careful, thumper probably likes trout too
<thumper> not a big fish fan actually
<magicaltrout> \o/
<thumper> trout is too subtle for me
<magicaltrout> unless its rainbow
<magicaltrout> or magical
<thumper> lazyPower: when are we next going to be at the same sprint?
<lazyPower> thumper - i have no clue but i'm totes ready for another coffee adventure
<arosales> nice congrats cloudguru and thanks for the contributions
<lazyPower> thumper are you coming to the next charmer summit?
<thumper> when and where?
<marcoceppi> lazyPower: payloads, what are the valid values?
<lazyPower> marcoceppi : 1 sec
<lazyPower> thumper: late september, looking like pasadena ca
 * marcoceppi tries to get 93 patched for 2.0
<thumper> hmm...
<marcoceppi> thumper: pasadena, ca mid-sept
<thumper> sounds interesting
<thumper> how mid?
<thumper> kiwipycon is 9-11 sep
<marcoceppi> like 12-14 tentatively
<thumper> \o/
<thumper> hmm...
<thumper> it's possible then
<lazyPower> marcoceppi: type docker and type kvm are all that i'm aware of, but i'm pretty sure it was arbitrary
<thumper> how does one get to pasadena?
<thumper> lax?
<marcoceppi> thumper: basically
<thumper> or sfo?
<marcoceppi> thumper: lax for sure, for international
<thumper> is it a drive or flight from LAX?
<marcoceppi> thumper: drive, 20 mins or so
<thumper> hmm...
<marcoceppi> lazyPower: I'll just have kvm and docker for now, I think 2.0.1 will fill the gaps
<thumper> I'd really like to come...
<lazyPower> ack
<thumper> perhaps we could even fix python-django charm to suck less
<thumper> like rewrite it or something
<lazyPower> thumper - i hear this thing called layers is all the rage
<thumper> yeah, I've had no time to play
<marcoceppi> thumper: I fixed the python-django charm, by making a django layer
<lazyPower> :)
 * thumper knows nothing
<thumper> marcoceppi: does it use virtual envs?
<marcoceppi> thumper: if you want it to, yes
<thumper> marcoceppi: and can you upgrade the django version?
<marcoceppi> thumper: yes
<thumper> python 3?
<marcoceppi> thumper: ootb
<thumper> sounds like I want that magic
<marcoceppi> thumper: sounds like you want layers
<thumper> fuck
 * marcoceppi makes rainbow magic hand wave
 * thumper needs more hours
<thumper> what about nginx?
<thumper> and gunicorn?
<marcoceppi> thumper: duh, thta's another layer, and they work together, like magic
<thumper> for django
<marcoceppi> ootb
<marcoceppi> thumper: here's a stupid simple example of implementing a django site as layers with nginx
<thumper> so instead of having multiple charms, we just have one?
<marcoceppi> thumper: no, we have layers
<marcoceppi> and you assemble them to build a charm for your django application
<marcoceppi> but you reuse the operational components that comprise that solution
<thumper> right, but you assemble it into a charm right?
<marcoceppi> yes
<thumper> so instead of python-django, gunicorn, and my subordinate payload charm
<marcoceppi> you just deploy "your charm"
<thumper> I add my payload into a layer, and use django layer
<marcoceppi> and it's django, gunicorn, nginx, as a charm
<thumper> right
<thumper> much nicer
<marcoceppi> very much
<marcoceppi> the glue code is pretty straightforward too
<marcoceppi> thumper: http://bazaar.launchpad.net/~ubucon-site-developers/ubucon-site/ubucon-layer/view/head:/reactive/ubucon.py
<marcoceppi> that's basically all it takes to get ubucon.org deployed
<thumper> unfortunately I have so little time, it's negative right now
<marcoceppi> thumper: I know. I feel it man
<marcoceppi> but soon, one day, probably - maybe ;)
<lazyPower> thumper - i had no idea the days we were spending crossover hours hacking on your project, that it was literally the only time you were ever going to have to hang out
<lazyPower> RIP fun
<thumper> yeah...
<lazyPower> hey man, it was fun while it was a thing :D
<thumper> model migrations won't be fully functional for 2.0
<thumper> and I've just been pulled onto MAAS 2.0 support
<lazyPower> eventually you'll come back for the layers, thats how we get ya :D
<lazyPower> until then i'll keep a pot of coffee on for ya and the light on the porch.
<lazyPower> mind magicaltrout on your way in
<marcoceppi> lazyPower: just finishing up unit tests
<lazyPower> solid, still g2g whenever you are
<marcoceppi> lazyPower: https://github.com/juju/charm-tools/pull/150
<marcoceppi> lazyPower: should clear travis in a min
<lazyPower> Dude, your test skills
<lazyPower> they have like, level ++++++
<marcoceppi> lazyPower: yeah, my copy and paste skills are on the up and up (shamelessly ripped from whoever - cory I think - made the storage tests for metadata yaml)
<lazyPower> I thought it looked suspiciously efficient
<lazyPower> +1 LGTM
<lazyPower> can i click the button?
<lazyPower> Because i've waited very patiently for this one \o/ and you just made my day :D
<marcoceppi> lazyPower: go for it
<marcoceppi> lazyPower: 2.0 is 100% complete. Time to cut a release
<marcoceppi> lazyPower: you still around?
<lazyPower> You betchya
<marcoceppi> lazyPower: check out ppa:juju/devel and update/upgrade ;)
<lazyPower> on it!
<jamespage> falanx, that's being worked on atm
<jamespage> zfs on wily was not as accessible
<jamespage> gnuoy, charm-tools 2.0.0 just broke our gate for charms...
<gnuoy> ah, that's why your mps failed
<gnuoy> I did wonder
<jamespage> gnuoy, basically the fix is to switch from "charm proof" -> "charm-proof"
<jamespage> gnuoy, I'll raise reviews now...
<jamespage> gnuoy, as nothing can get past verification atm
<gnuoy> kk, ta
<marcoceppi> jamespage gnuoy if it's easier to change the dep, 1.11.2 is still on pypi if you charm-tools==1.11.2 to avoid 2.0 atm
<jamespage> marcoceppi, meh
<jamespage> marcoceppi, its the same work either way
<marcoceppi> ack, again, sorry about that
<jamespage> gnuoy, generating reviews now - https://review.openstack.org/#/q/status:open+branch:master+topic:charm-tools-2.0
<gnuoy> excellent
<jamespage> gnuoy, same change across all charms - the only one I could not do was lxd - rockstar - you'll need to pull in the same fix to your in-flight review
<jamespage> marcoceppi, oh great - theblues has a == on requests 2.6.0
<jamespage> which is creating problems on OSCI verification, but not in upstream
<jamespage> that's odd
<gnuoy> jamespage, is it right that hacluster is not part of our charm collection on github?
<jamespage> gnuoy, yes
<gnuoy> ack, ta
<jamespage> gnuoy, I'm having to pin requests to 2.6.0 in the charm test-requirements.txt to avoid pep8 lint failures in OSCI for now
<jamespage> we can drop that later - just want to unblock the gate right now
<gnuoy> ok
<jamespage> gnuoy, i think the version of pip and pkg_resources that the osci lab uses is different to upstream and to my local xenial install which causes some issues with entry point loading without that
<jamespage> gnuoy, ah crap - I managed to re-create the reviews...
<jamespage> gnuoy, I was using a pre-canned commit message - misses the ID's that git review added first time around
<jamespage> gnuoy, I'm putting a lot of load on OSCI but generally the lint failures I saw first time round have gone with the version pin
<gnuoy> kk
<jamespage> gnuoy, anything with a +1 post 10:00 in the following list is good to land IMHO - https://review.openstack.org/#/q/status:open+branch:master+topic:charm-tools-2.0
<marcoceppi> jamespage: yeah, that's what this is about
<jamespage> marcoceppi, the fixed requests version in theblues?
<marcoceppi> jamespage: yeah, see pm
<jamespage> gnuoy, time to unblock the gate?
<jamespage> https://review.openstack.org/#/q/status:open+branch:master+topic:charm-tools-2.0
<jamespage> two failed due to amulet problems - but I suspect they have never passed...
<jamespage> beisner, ^^
<gnuoy> jamespage, ack, looking
<beisner> dosaboy, ack re: lp bugs going directly to fix-released via upstream.  aiui, it's a behavior change and by design.  ref:
<beisner> https://review.openstack.org/#/c/248922/
<beisner> http://lists.openstack.org/pipermail/openstack-dev/2015-November/080288.html
<beisner> jamespage, gnuoy -  unless i've missed one, the osci tests are all calling tox -e pep8 which i think should give you the same experience anywhere, shouldn't it?
<jamespage> beisner, well apparently not
<jamespage> beisner, as the upstream verification passed, but OSCI bailed on the fixed requests version in theblues....
<beisner> jamespage, if the tox ini is set to also use system packages, then definitely not.  some of them i believe are.
<jamespage> beisner, openstack add in new pkg-resources and stuff which i suspect we don't
<beisner> jamespage, ah right.  we're just doing whatever the tox.ini and *-requirements.txt files  instruct
<beisner> jamespage, ack re: cinder-backup amulet not passing.  can't raise a bug, but added a card for triage.
<jamespage> beisner, I suspect the tox/pip/pkg-resource environment on the upstream ci is not the same as trusty...
<beisner> jamespage, right.  imho we shouldn't be leaning on site-packages at all on unit/lint tests.
<jamespage> beisner, tbh its probably not - but depending on which tox/pip you use determines which versions of stuff you get in the venv
<beisner> jamespage, lovely.  so, build a venv, install a specific version of tox into it, then call tox?
<jamespage> beisner, not quite
<beisner> jamespage, ack i see you pinned requests.  looking at other upstream projects, they pin requests to a range of versions as well.
<jamespage> beisner, that should be droppable once theblues sorts things out
<jamespage> beisner, marcoceppi is dealing with that
<marcoceppi> jamespage: https://github.com/juju/theblues/pull/20 and http://paste.ubuntu.com/15477938/
<jamespage> yah gotcha
<marcoceppi> jamespage: is that enough?
<jamespage> marcoceppi, I commented generally on your pull request...
<marcoceppi> jamespage: I missed that, thanks
<marcoceppi> some balked at the idea of using a range
<marcoceppi> and this is why I welcome, snappy, our new packaging overlord
<jamespage> gnuoy, ok for me to +1 workflow those two rollup config changes?
<jamespage> they are passing pep8 ok now with a rebase...
<gnuoy> jamespage, yep, please do
<jamespage> gnuoy, that feels nice...
<jamespage> I like ditching lots of old code...
<gnuoy> tinwood, would you mind taking a look at https://code.launchpad.net/~gnuoy/charms/trusty/hacluster/pause-resume/+merge/289911 if/when you have a moment?
<tinwood> gnuoy, sure.  I'll do it this afternoon.
<gnuoy> ta
<tinwood> gnuoy, is hacluster not on github then?
<gnuoy> tinwood, apparently not, I was surprised too
<tinwood> gnuoy, how strange. Oh well.
<tinwood> gnuoy, this ceph-mon charm is quite awkward.  I think I've got it right now - just doing the amulet tests on my bastion to see if pause/resume actually works!
<gnuoy> tinwood, hacluster is an odd ball too
<gnuoy> beisner, I accidentally posted a charm recheck full on https://review.openstack.org/#/c/294554/ after already having a successful run and just my luck the second run failed due to a deploy timeout which I assume is unrelated to my change. Do I have to do another charm-recheck-full to get rid of the canonical ci -1 do you think?
<falanx> Is there a best practice for filesystems on lxd/juju?  There are many articles saying zfs is where the party is, but nothing definitive stating what has the best support vs performance.
<rick_h_> falanx: yes, running lxd init on xenial I think will ask about zfs as an option and that will help with the best setup for speed.
<rick_h_> falanx: I think jcastro has done some testing/etc and has a sweet setup
<beisner> hi gnuoy, i'll have a look and holler back shortly
<gnuoy> beisner, thanks much appreciated
<jcastro> falanx: I've been pretty happy with lxd/zfs/juju
<beisner> gnuoy, yes we can ignore the results of that recheck.  it was actually a nova-compute failure on xenial due to libvirt/locale bug 1560939.
<mup> Bug #1560939: libvirt-bin fails to install on a fresh xenial server <amd64> <apport-bug> <ec2-images> <uosci> <xenial> <libvirt (Ubuntu):New> <https://launchpad.net/bugs/1560939>
<beisner> gnuoy, shall i push that now?
<falanx> jcastro rick_h_ thanks!  we're looking to do that combo + openstack in production asap =)
<jcastro> falanx: heh I should note that work laptop filesystem and your production openstack architecture filesystem recommendations are probably different. :)
<beisner> rockstar, ok although tests did pass a day ago, there has been some movement in charm-tools requirements and tests now fail.  all of the charms except lxd have been updated to satisfy and work around that issue.  your change will need one more patchset.
<beisner> rockstar, here is an example of what needs to change:  https://review.openstack.org/#/c/296312/   ... and that should let us merge this puppy.
<rockstar> beisner: a re-sync should do it, right?
<rockstar> Oh, no, no it won't. :)
<beisner> rockstar, i don't see that lxd has received those fixes at master
<rockstar> On it.
<beisner> coolio
<tinwood> cholcombe, I wonder if I could ask you a question about what pause-health does in the ceph-mon charm?
<balloons> jcastro, is there an up to date guide on using juju2 with lxd?
<jcastro> balloons: https://jujucharms.com/docs/devel/config-LXD
<jcastro> balloons: oh, this reminded me I need to PR and fix a few bits there
<balloons> jcastro, I have some thoughts on that page, lol
<balloons> but if that is it, I'm not seeing success. It seems like it's missing some bits, and I'm left confused on a few points
<jcastro> the lxd-images stuff is all out of date
<jcastro> all you should need to do is `sudo lxd init`
<jcastro> balloons: to be fair though, all of this was in flux until the other day
<jcastro> alexisb: do we know if juju still wants "ubuntu-trusty" as an alias for a container or will it just use the automagic remotes LXD provides?
<balloons> I have to use the specific alias name?
<pmatulis> balloons, jcastro: guys, that page was heavily refreshed yesterday. i can't remove lxd-images b/c you can't use Xenial Juju machines without it
<rick_h_> jcastro: she's out, I think it looks for that specific alias so that it doesn't stomp on the existing lxd use
<balloons> jcastro, I guess I'd love to see your PR, before I start firing off bugs
<jcastro> you used to have to, not sure if it still does
<jcastro> pmatulis: oh ok so we still need that command then?
<jcastro> I think that's a bug then? The intent was to just use what LXD provides without the user having to know that command.
<pmatulis> balloons, jcastro: unless the user is smart enough to know they need to create an alias for the Xenial image
<pmatulis> beta3 is supposed to create the alias automatically
<jcastro> ah ok
<jcastro> so it's still in flux then
<pmatulis> is there anything else with that page that is wrong or can be extended? lemme know or open an issue
<mgz> rick_h_: we don't care too much if juju2 is in the xenial beta2 image right?
<jcastro> all we need after that is for bootstrap to work without needing --upload-tools and I'm good to go
<jcastro> balloons: which part are you stuck on?
<jcastro> mgz: we're having a hard time getting in also for charm-tools, so we said nope and going to try as soon as beta2 is out.
<jcastro> all the bugs are filed, we're just waiting in line
<pmatulis> anyone know why i would not have juju2 sub-command completion on a fresh Xenial openstack instance?
<pmatulis> scratch that, seems a reboot fixed it
<jcastro> pmatulis: I am wondering if keeping around the "changes from lxc local provider" in that page just adds length for something most people won't care about
<balloons> jcastro, I can't juju bootstrap. And the instructions left me questioning if just installing juju got me the lxd provider or not
<balloons> also, do I need an environments.yaml file? How do I create a controller for lxd?
<pmatulis> jcastro: yes, that's debatable
<jcastro> https://jujucharms.com/docs/devel/controllers-creating
<jcastro> you do not need an environments.yaml
<pmatulis> balloons: you're not reading the page :D
<jcastro> after you have it installed all you should need to do is `juju bootstrap blah lxd --upload-tools
<jcastro> the style of the docs are not conducive to "ok you've finished this page, go to this page next" because there's no pagination
<jcastro> perhaps once we have the final changes in from the tool we should consider just doing a quick and dirty summary up top that gets a controller fired up
<jcastro> and then while that is happening the user can read all the details below
<pmatulis> jcastro: maybe you're looking for a tutorial or yet another getting-started page?
<pmatulis> jcastro: Juju is complex and it's very difficult to have a running story that fits everyone's purpose
<jcastro> I can't fix the get-started page until 2.0 is out though
<jcastro> and it's not difficult, we're just telling people things in the wrong order
<pmatulis> jcastro: there are plans for a much-needed "architectural overview" page replete with diagrams and core concepts, even arrows, but there is so much to do
<jcastro> yeah it's just like, the instructions have CLI commands in them
<jcastro> and then at the end of the section it's like "xenial comes with lxd by default"
<pmatulis> what's wrong with saying that?
<jcastro> actually, let me just ask luca if we can start on the 16.04 version of the get-started page
<jcastro> well, all the commands above that are explaining how to install and configure lxd
<pmatulis> install on Wily maybe
<pmatulis> and LXD does not require any configuration, unless you want to use network stuff or ZFS
<pmatulis> maybe if you give me a specific example
<jcastro> I'm just saying we can trim most of that entire section out and make firing up the controller more upfront
<jcastro> like a TLDR version
<pmatulis> well, these are the definitive/upstream docs for Juju. everything cannot be a TLDR
<pmatulis> a TLDR/getting-started/quickstart should be separate
<balloons> well I had to not use lxd-images (that fails), and then starting a container manually failed. So not really juju's fault I guess. But the bootstrap command didn't recognize lxd as a provider for instance
<balloons> anyways, are you planning to swap the 'getting started' guide to using lxd for 2.0?
<pmatulis> balloons: dunno what's going on over there. sounds like LXD itself is not working
<jcastro> I don't think we can swap it for LXD on get-started, the get-started is supposed to work on all OSes afaict.
<balloons> pmatulis, right, so I'm not blaming juju for that at all. But as I said the bootstrap command didn't work, and I installed juju-local to maybe fix it -- but the page doesn't mention that
<balloons> I know with juju2 it's changing, so if nothing else I can say the page is confusing at the moment
<balloons> and I would like to play with lxd + juju2 locally, however that may be possible
<jcastro> wait, are you trying to use juju1 with lxd?
<pmatulis> balloons: yeah, you should be using Juju 2
<pmatulis> and don't get me started with the getting-started page :)
<balloons> jcastro, I wasn't, but again, those pages make things confusing. Also, I can't get juju2 in juju/stable (of course not), but the page mentions it
<jcastro> yeah
<jcastro> you're in the middle of an awkward transition
<jcastro> the page is being written for the future while the commands are still being written
<jcastro> balloons: all the good information on how to use juju2 is in the release notes and not the docs
<jcastro> like how to list-clouds, the new credentials stuff, etc.
<jcastro> balloons: I can walk you through lxd in a hangout after I'm done with this meeting I'm on now
<magicaltrout> balloons afaik you should be good on the latest beta from the ppa
<magicaltrout> juju bootstrap dev-lxd lxd --debug is about all you need once you have a lxd trusty image
<pmatulis> balloons: yep, i concede there is confusion (no juju2 in stable) but, given the resources we have, it was decided to write for the future. this is a devel branch after all
<pmatulis> (the software and the docs are not released)
<balloons> pmatulis, yes I agree. Write towards what is landing (and there is the header stating as such)
<LiftedKilt>  
<pmatulis> nothing under the kilt?
<LiftedKilt> pmatulis: it was feeling a bit breezy there for a second, that's for sure
<cory_fu> c0s: So, the main problem that admcleod- was talking about earlier when he mentioned multiple namenodes is that there is a bug in Juju leadership in the current stable (1.25) where if you spin up and then tear down two NNs, then spin up two more without re-bootstrapping the entire environment, a leader isn't selected.
<cory_fu> I haven't encountered it, personally, so I can't really give any more details
<c0s> cory_fu I am a bit confused: how is Juju leader election related to ZKFC leader election for HDFS HA?
<cory_fu> But from what he was saying, the HDFS HA should be pretty functional
<cory_fu> c0s: He's using juju leadership to ensure that only one NN tries to do the formatting and other logic that apparently should only happen once.  Once it's up and configured to HA, juju leadership is irrelevant
<c0s> cory_fu: oh, that's the juju way of handling the distributed locking. Got it.
<cory_fu> Right
<c0s> cory_fu: by the way: Here's Bigtop puppet recipes on the topic: https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop/manifests/init.pp#L556 and all the way to line 607
<cory_fu> admcleod-: ^ for reference when you sign back on
<c0s> I will add this to the review notes as well. And thanks for the explanation: now I can focus on the HDFS HA part ;)
<cory_fu> c0s: I haven't looked at the HA branches in much detail recently, so I can't speak to it other than what's been explained during the dailies
<cory_fu> Cool.  Thanks
<c0s> cory_fu: speaking of solving the distributed lock issue
<c0s> what if active and standby NNs know about their roles from the beginning?
<c0s> This way, the file system formatting can only be done on the primary node by simply checking against the namenode role.
<cory_fu> How would they know their role?
<c0s> when you deploy the services you can arbitrarily choose one to be active and another to be standby
<c0s> here's how we do it
<c0s> Here&rsquo;s the link to Bigtop&rsquo;s puppet recipes to setup HDFS HA http://is.gd/d0cWel  from line 556 all the way to line 607
<c0s> scratch that
<c0s> https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop/manifests/init.pp#L639
<c0s> we always pick the 1st node to carry on as the active and to hell with that ;)
<c0s> this is our choice
<c0s> because the same recipe code is getting executed on all the namenodes; with such a check we guarantee "only one" execution
<c0s> cause otherwise you have to expose the internal HDFS concern up a layer to the orchestrator, and that makes the implementation harder, of course
<cory_fu> So, that's basically what we're doing with the Juju is_leader check.  The "lowest numbered unit" would work and is what charms used before is_leader was available, but is_leader is generally considered better (barring the bug we're seeing)
<cory_fu> In the end, it basically works out to the same thing, and in theory should actually be simpler since it's a simple boolean check (and the Juju controller decides on the leader)
<cory_fu> Also, the bug we're hitting with is_leader really only affects iterative development use.  Normal usage wouldn't be affected by it
<cory_fu> I feel like there was a stronger reason why we moved away from "lowest unit" for this specific charm, as well, but I can't recall off the top of my head
<c0s> ok, that makes sense. Thanks cory_fu for the explanation - learning a bit more each hour ;)
<mux> new blog post about juju posted here: http://blog.emccode.com/2016/03/23/storage-operations-with-juju-charms/
<mux> specifically, exploring what's out there for attachable, detachable, persistent storage for charms
<mux> (spoiler: not much.)
<c0s> cory_fu: do you know if the bug for the leader election has been logged somewhere?
<c0s> I have a feeling that it might be not a bug, but rather a limitation of ZK
<c0s> if you look here https://aphyr.com/posts/291-call-me-maybe-zookeeper all the way to the Recommendations section, it says
<c0s> "Also keep in mind that linearizable state in Zookeeper (such as leader election) does not guarantee the linearizability of a system which uses ZK. For instance, a cluster which uses ZK for leader election might allow multiple nodes to be the leader simultaneously..."
<c0s> And that guy knows what he's talking about ;)
<cory_fu> c0s: So, this doesn't actually have anything to do with ZK, only the Juju is-leader command. As I understand it, Juju's is-leader tool is supposed to return True for at least one unit at all times, except for a somewhat brief fail-over period when the leader unit goes away. The problem is that Andrew is seeing normal behavior for units 1 and 2, but not 3 and 4 after removing the service entirely and redeploying it
<cory_fu> Again, for a new service coming up, at least one unit (the first to come up) should see is-leader return True even before running any charm code
<c0s> oh, I see. Now I think I got it - was misled a bit by the presence of ZK from HDFS HA ;)
<cory_fu> I assume there was a bug filed against core, but I'd have to go looking for it
<c0s> no worries
<c0s> so, juju doesn't use ZK under the hood, after all?
<cory_fu> Yeah, there are a couple of orthogonal ideas of "leadership" going on
<c0s> k, thanks!
<cory_fu> No, the controller just picks one of the units as the leader and records that in Mongo.  With the current single controller, something like ZK isn't needed at all.
<cory_fu> c0s: Also, someone in #juju-dev could give a much more accurate explanation of how Juju's leadership code works
<skay_> my services seem to be running but workload state shows as unknown when I run juju status.
<cory_fu> But from a charm's perspective, we're supposed to be able to rely on is-leader telling us if we're leader or not, and (almost) always having a leader
<c0s> cory_fu: thanks, this really helps!
<c0s> the fact of the single controller skipped my mind.
<cory_fu> And of course, we're really only relying on that to ensure that only one unit runs the HA initialization logic.
<magicaltrout> http://www.oreilly.com/programming/free/how-to-make-mistakes-in-python.csp?imm_mid=0e1f49&cmp=em-prog-free-lp-ostx16_nem4_mistakes_in_python
<magicaltrout> its like Oreilly knew I was hacking some python badly......
<lazyPower> Mayyybe someone knows the answer to this... i need to mock/patch a private method on an object. my google fu is failing me on finding the method to do so, any helpful pythonista's in the crowd?
<jrwren> lazyPower: don't do that. :]
<lazyPower> jrwren - just leave the priate method untested?
<lazyPower> *private
<mgz> lazyPower: also, you can just do that
<mgz> private in python is just a convention
<lazyPower> well, thing is - i need to patch it as its kind of the core dispatch method in the class
<mgz> but generally, you only want to test interface points, so not private helpers
<lazyPower> i guess i could move it elsewhere and just patch it there
<lazyPower> nebulous questions without context
 * lazyPower resumes test hacking
#juju 2016-03-24
<stub> cory_fu: Using lowest-numbered-unit for leader created split brains, as units usually have differing lists of peers visible during setup and choose different leaders. It only stabilises once all the initial peer-relation-joined/changed hooks have run, and destabilises once you start dropping units.
<stub> And that period before joining a peer relation where a unit has no idea if it is alone or about to join other units.
<marcoceppi> lazyPower: still need help?
<lazyPower> marcoceppi nah i wound up moving it and patching it properly
<lazyPower> https://github.com/juju-solutions/charms.docker/pull/13 - moved the __run method to runner.run and things kind of fell in order after that.
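For reference, a double-underscore "private" method can be patched directly via Python's name mangling, though moving it to a collaborator (as done above) is usually cleaner. A sketch with a hypothetical Dispatcher class, not the charms.docker code:

```python
from unittest import mock

class Dispatcher:
    """Toy stand-in for a class whose core dispatch method is 'private'."""

    def __run(self, cmd):
        # In real code this might shell out; tests want to avoid that.
        raise RuntimeError("should be patched in tests")

    def deploy(self, image):
        return self.__run(["docker", "run", image])

# A double-underscore name is mangled to _ClassName__name, so patch that:
with mock.patch.object(Dispatcher, "_Dispatcher__run", return_value=0) as run:
    assert Dispatcher().deploy("ubuntu") == 0
    run.assert_called_once_with(["docker", "run", "ubuntu"])
```

Outside the `with` block the real method is restored, so "private in python is just a convention" holds for mocking too.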
<blahdeblah> Anyone able to unbreak http://juju-ci.vapour.ws:8080 ?  Seems like I'm not the only one whose runs are getting internal server errors...
<gnuoy> jamespage, if you get a moment https://code.launchpad.net/~gnuoy/charms/trusty/hacluster/pause-resume
<BlackDex> hello there
<BlackDex> When using manual provider, can i put the bootstrap node on a lxc?
<stub> BlackDex: Are you using Juju 2.0?
<stub> BlackDex: The manual provider allows you to add machines to an existing environment using its IP address.
<stub> BlackDex: With Juju 1.X, you can create an environment using the local provider and add machines too it using the manual provider. The bootstrap node is the local machine, and not in a container.
<stub> BlackDex: With Juju 2.0. you can create an environment using the lxd provider, putting the bootstrap node in a container, and add machines too it using the manual provider.
<stub> BlackDex: You should also be able to bootstrap the manual provider, putting the controller node in an lxc container. You would need to set up the container first and ensure it has network connectivity.
<BlackDex> stub: No, juju 1.25.3 atm
<BlackDex> stub: So if i create a new lxc, wich has network, i can bootstrap it to that machine :)
<BlackDex> that sounds clear
<stub> BlackDex: Aparently yes :) I've only used the manual provider with an OpenStack controller node. But any Ubuntu VM or container you can ssh into should work just fine.
<BlackDex> good to hear
<BlackDex> thx!
<BlackDex> i'm going to try that right now
<cory_fu> stub: Thanks for the reminder.  Our particular use-case, though, is quite a bit more limited than the general case: we only need to worry about two peers, and we only care about the "leader" once during initial startup.  Regardless, is-leader is still the better solution and it's what we're using.
<cory_fu> stub: Also, I apologize for being so far behind on the layer-basic, charms.reactive, etc. reviews.  I'm basically only getting to things if they immediately impact the big data charms and could really use some time to focus on the libs.  (And / or more help reviewing, anyone-who-wants-to-take-a-look. :p)
<BlackDex> how can i change the default ip the bootstrap node listens on?
<shruthima> Hi Kwmonroe, As you suggested we have made all the changes to IBM Installation Manager layer (https://github.com/kwmonroe/layer-ibm-installation-manager)  locally ( in ./reactive/ibm-installation-manager.sh) to make it a functional layer and we are able to deploy IBM-IM successfully.  But when we are trying to install IBM-WAS on top of IBM-IM layer , IM layer states like 'im.installed' is not recognized in IBM-WAS layer  .
<shruthima> Although we have set LAYER_PATH and  included IBM-IM Layer in layer.yaml....(includes: ['layer:ibm-installation-manager']) Could you please suggest us is there anything to be added to use IBM-IM layer in other products eg:WAS...!!
<marcoceppi> rick_h_: I'm trying to name my charm with an emoji and it's not working. ...is this a bug? juju deploy ./💩 --series trusty
<marcoceppi> ERROR bad charm URL in response: URL has invalid charm or bundle name: "local:trusty/💩-326"
<rick_h_> marcoceppi: hmm, I'm going to say no...working as planned?
<rick_h_> I don't think we're up on supporting emoji for charm names :P
<marcoceppi> rick_h_: not even a 2.1 stretch goal?
<rick_h_> marcoceppi: I can't tell if you're joking?
<jrwren> iirc we are [a-zA-Z][a-zA-Z0-9]{2,}
<marcoceppi> rick_h_: I've got an emoji domain, and want to name the service appropriately :)
<marcoceppi> http://💩.ws
<rick_h_> marcoceppi: wow...so not joking. /me is trying to take that in a bit
<marcoceppi> rick_h_: I was half joking
<marcoceppi> rick_h_: like this whole thing started as a goof
<jrwren> so ãã« is not allowed either.
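jrwren's recalled rule can be checked with a quick sketch. This is only his approximation from memory; Juju's actual charm URL grammar also permits interior hyphens (e.g. juju-gui):

```python
import re

# Approximation of the charm-name rule as recalled above: an ASCII letter
# followed by at least two alphanumerics.  (The real Juju rule also allows
# hyphens between alphanumeric runs.)
NAME_RE = re.compile(r"^[a-zA-Z][a-zA-Z0-9]{2,}$")

def valid_charm_name(name):
    return bool(NAME_RE.match(name))
```

Under this rule any emoji or non-ASCII name is rejected before Juju ever looks at the directory contents.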
<rick_h_> marcoceppi: yes, I can tell
<marcoceppi> but you can mkdir 💩
<marcoceppi> and I put that as the metadata name
<rick_h_> marcoceppi: and then Juju lets you down
<marcoceppi> like it all started coming together
<rick_h_> after all that progress
<rick_h_> lol
<marcoceppi> so, the fact that I can get an emoji domain sent me down this road
<marcoceppi> but I thought "why not."
<marcoceppi> we could transpose the emoji code to punycode
<marcoceppi> where 💩 is xn--ls8h
<marcoceppi> I'm going to file a bug, because I think it'd be a good shtick for talks and such, and everyone uses emojis in their filesystems, right? right. probably not
<marcoceppi> but still
<marcoceppi> amazing https://godoc.org/golang.org/x/net/idna
<marcoceppi> there's already a library
<marcoceppi> actually, this is a terrible idea, isn't it?
<jcastro> lazyPower: I know you have talk submissions all in your inbox, so if you could submit those that would be <3, see my mail to the list.
<jcastro> bdx: you too!
<jrwren> i do use emoji and other unicode chars in my filesystem.
<jrwren> I have folder named  ð»
<jrwren> for that matter, I have many directories with spaces in them. We should also support spaces in charm names.  right?  right?
 * rick_h_ gives jrwren the shusher on the head
<jrwren> I picture you pointing at me and yelling "SILENCE YOU!!!"  :]
<aisrael> of course you would, jrwren :p
<jrwren> aisrael: lol. Yes, I live my insanity.
<rick_h_> jrwren: sorry, watched Home with the boy this weekend. I want a 'shusher' now
<jrwren> rick_h_: ah! I still have not seen that one.
<rick_h_> jrwren: oh it's predictable and such, but worth a few laughs
<LiftedKilt> what ppa do I need to grab juju2 from in order to support the new api in maas 2.0?
<rick_h_> LiftedKilt: it's not ready yet. The team is working to update the maas provider for the new MAAS 2.0 api
<rick_h_> LiftedKilt: and it'll take some work to get that work done
<rick_h_> LiftedKilt: when it's there it'll be in the devel juju ppa first
<LiftedKilt> rick_h_: gotcha - so if I want to test with maas 2.0, I need to wait for juju
<jrwren> on a similar topic, I can't create a new launchpad account named ð ð» ð½ ð¤    ;]
<rick_h_> LiftedKilt: at this time if you want to do juju on maas 2.0 you need to wait on juju
<rick_h_> LiftedKilt: you can use maas on it's own and try out 2.0, or try juju 2.0 on maas 1.9
<LiftedKilt> rick_h_: ok thanks - back to mas 1.9 it is! haha
<LiftedKilt> maas*
<rick_h_> LiftedKilt: make sure to follow the juju mailing list. We'll be using it to keep folks up to date as we move it forward
<LiftedKilt> rick_h_: awesome - just signed up.
<cory_fu> kjackal: Before you sign off, I see you put the README review card in Review.  Are there any PRs for that card or is the audit complete and we can move that card to Done?
<kjackal> cory_fu, there are no outstanding PRs for this card. Everything is addressed
<cory_fu> Thanks
<admcleod-> c0s: so im thinking, with upgrading hadoop, that id like a single action against say the namenode, and to propagate the upgrade information to the connected units via interfaces - so maybe a periodic check of an 'am i supposed to be upgrading?' function for datanodes, for example, although that may cause issues with the 'rolling' nature of the HA upgrade...
<admcleod-> c0s: then the upgrade info (what version to, what version from, etc) would be propagated around the bigdata charms, so wouldnt have to rely on an 'orchestrator'
<LiftedKilt> rick_h_: a couple weeks ago you guys had a great ubuntu on air video. I can't find it on the ubuntuonair youtube channel - is it unlisted?
<LiftedKilt> rick_h_: and have you guys done any others?
<rick_h_> LiftedKilt: next one is tomorrow. jcastro can you help link up LiftedKilt please?
 * rick_h_ is on a phone grabbing lunchables atm
<jcastro> LiftedKilt: sure, do you remember what it was about? was it our office hours or the zfs one?
<jcastro> http://youtube.com/jujucharms is the channel
<jcastro> https://www.youtube.com/watch?v=2jC8217wjTE perhaps?
<LiftedKilt> jcastro: it was the office hours march 2016 - which I found thanks to your link
<LiftedKilt> jcastro: but now that you've linked that video, I'm watching it now haha
<c0s> admcleod-: I think it makes sense. Let me dig a bit into it, cause I'm still rough at the edges when it comes to Juju
<c0s> thanks for sharing the idea!
<falanx> Is there such a thing as a self-hosted private charm store for internal use?
<bdx> openstack-charmers: Hows it going? Can someone point in the direction of where/how keystone creates endpoints for a service on relation joined hook?
<bdx> openstack-charmers: radosgw for example
<c0s> cory_fu kwmonroe with respect to the upgrades - ie in Hadoop. we have upgrade of the software and the upgrade of the filesystem.
<c0s> The former is pretty common for long live clusters (but we probably aren't concerned much with it, right?)
<c0s> the latter happens less frequently, but seems to be our main objective, is it?
<kwmonroe> c0s: not sure what you mean by 'upgrade of the filesystem'.  fwiw, i thought our main objective was the upgrade of hadoop itself, so the former use case.
<c0s> kwmonroe: HDFS upgrade is in order when the fsimage layout got changed
<kwmonroe> ah, gotcha c0s.  does hdfs change outside of a hadoop version change?
<c0s> basically, if you have Hadoop 2.0.5 and then install Hadoop 2.7.1 - the namenode will refuse to run because the layout is old and you have to go through an upgrade procedure
<c0s> no, it doesn't
<c0s> they are the same release train
<c0s> it is just an extra thing to do if something like this happens
<kwmonroe> ok, so in that case, our objective would be to handle both your use cases.. upgrading hadoop and whatever else needs to be done to support the new version.
<c0s> ok. just to repeat myself kwmonroe - will see the filesystem upgrade only in case of really long-live clusters. Or if someone happens to preserve an old file system and then tries to use it with a newer version of the software. Which is a quite crazy idea, IMO ;)
<beisner> hi bdx - add_service_to_keystone()  and add_endpoint() in the keystone hooks dir is where the mechanics ultimately take place.   so, ceph-radosgw's identity_joined() advertises endpoint values via relation data, then keystone acts on it.
<bdx> beisner: awesome, thanks
<bdx> beisner: so, I need to add the object storage endpoint `http://<ip-or-fqdn>:8080/v1/AUTH_%\(tenant_id\)s`
<bdx> beisner: to do this I would need to add the endpoint to those that radosgw advertises via relation?
<bdx> exactly what I was looking for! thanks!
<lazyPower> marcoceppi tvansteenburgh - bite sized fix on this one - https://code.launchpad.net/~lazypower/juju-deployer/patch-1561689/+merge/290078
<lazyPower> wait thats not... hang on
<lazyPower> yeah nvm sorry for the noise. Its fine in trunk, i needed to bump my deployer
<beisner> hi coreycb, icehouse cloud archive sru for bug 1393391 are technically clear to promote tomorrow, but i prefer not to push pkgs on Fridays.  ok to delay to Monday?
<mup> Bug #1393391: neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-update_fanout.. <patch> <sts-sru> <verification-done> <neutron:Confirmed> <neutron (Ubuntu):Invalid> <neutron (Ubuntu Trusty):Fix Released> <https://launchpad.net/bugs/1393391>
<coreycb> beisner, sure that's fine
<beisner> coreycb, ack thx
<coreycb> beisner, thank you!
<beisner> coreycb, yw :)
<kjackal> c0s, thats true, this is why I was thinking that splitting spark in two charms would help
<kjackal> I will have to think a bit more the split of the charm, because such a split would mean that there are more interactions+interfaces
<kjackal> For starters, I want to do the simple/rough restart on all units
<kjackal> then we can iterate over it to tune it
<kjackal> I have already enough unknowns :)
<c0s> I am pretty sure you don't account for some unknown unknowns yet, kjackal
<kjackal> :)
<pmatulis> what is 'test-mode' for a model?
<bdx> hey whats up everyone? How is charm-tools currently being installed in wily?
<bdx> *charm-tools 2.0
<lazyPower> bdx - from ppa:juju/devel (you need both ppas enabled to fetch all the deps)
<bdx> lazyPower: I need stable + devel enabled?
<lazyPower> yep
<bdx> nice!
<bdx> thanks man!
<lazyPower> np
<stormmore> might be the crazy question of the day, is it possible to export a pre-existing environment as a bundle?
<edsiper> Is there a way to speed up the upgrade-charm process ?, it's taking several minutes while testing small changes from the original layer
<lazyPower> stormmore - are you on stable juju?
<lazyPower> regardless - if you have the juju-gui deployed you can export your environment as a bundle, if the gui is a no-go for you for whatever reason, if you're on juju stable, this is an option as well: https://github.com/niedbalski/juju-deployerizer
<stormmore> lazyPower: actually at this point I haven't installed a version in the environment, just finished setting up my MAAS server how I want it with vlans, etc.
<stormmore> lazyPower: I am setting up a lab / poc so I am trying to determine if I am going to have to hand write the bundle file after I have figured out and deployed the machines or if I can export, destroy, redeploy
<lazyPower> edsiper - ah i assume you mean the wheelhouse installation/etc? I'm not aware of any way to speed that up :( sorry
<kwmonroe> kjackal: c0s: on the subject of splitting spark.. how about we add layer:leadership to spark's layer.yaml.  @when leadership.is_leader, set MASTER:PORT on a spark peer relation and start the master.  @when_not is_leader, retrieve MASTER:PORT from a spark peer relation and start the worker.  then it's the same spark charm handling master and worker roles.
<lazyPower> stormmore: well you cant "bundle up" the maas setup bits that i'm aware of, no
<c0s> kwmonroe: it seems a bit too complex, honestly
<c0s> although, you know better the inner-guts of the system, so perhaps this is the simplest solution there is
<lazyPower> stormmore is your maas server in a vm?
<stormmore> no I am fine with that part but can I export all the juju deploy, with all the settings like networks, etc.
<lazyPower> yeah you can export the juju enviornment
<lazyPower> if you're in 2.0 territory, yes the network spaces and etc. come with that bundle
<lazyPower> along with storage concerns you may have modeled as well
<magicaltrout> kwmonroe: isn't that pretty much what I did with PDI?
<magicaltrout> seems to be, if so its not overly complex c0s
<kwmonroe> c0s: the benefit i see is that all spark units could either be a master or a worker depending on the current leader.  when the leader unit dies, another will be auto-selected and will fire up its master process.  with split charms (and HA), you'd have to have a standby master, which seems like a waste of  a machine.
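kwmonroe's single-charm scheme can be simulated in plain Python. This is only a sketch of the role-assignment logic (hypothetical function names; a real charm would use charms.reactive handlers gated on leadership, and Juju itself picks the new leader on failover):

```python
def assign_roles(units, leader):
    """Return {unit: role} with exactly one master: the current leader."""
    return {u: ("master" if u == leader else "worker") for u in units}

def failover(units, dead_leader):
    """When the leader unit dies, Juju elects another unit; re-derive roles."""
    survivors = [u for u in units if u != dead_leader]
    # Stand-in for Juju's controller choosing a new leader among survivors.
    new_leader = survivors[0] if survivors else None
    return assign_roles(survivors, new_leader)
```

The property kwmonroe is after falls out of the model: at any point exactly one unit is master, and no standby machine sits idle waiting for a failover.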
<stormmore> nice :) I was "playing" with 1.9 in an early POC that I destroyed to fix the network config. plan to hand build an openstack environment POC but want to be able to tear it down and rebuild from a configured MAAS environment
<kwmonroe> magicaltrout: what you did with PDI was unholy python.
<edsiper> lazyPower, thanks, anyways there is something wrong, as the upgrade is taking a bit long I am trying to remove/destroy the unit, it changed the state to "dying" but is still there :/
<magicaltrout> aww you make me so sad
<mattyw> cmars, ping
<lazyPower> edsiper - any relations attached to the unit in question that may be in a failed state?
<kwmonroe> :)
<lazyPower> a dependent subordinate perhaps?
<cmars> mattyw, hesitant pong?
<magicaltrout> just because its not as cool as bigtop :P
<c0s> kwmonroe: I am not sure if you can dynamically add a new spark-master
<kwmonroe> lol magicaltrout.. fwiw, yes, it is what you did with pdi (https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L91).  i was in rabblerouser mode.
<stormmore> lazyPower: you do bring up an interesting side question, can I run the MAAS server inside a VM running in the OpenStack cluster whose hardware that same MAAS system manages? (looking at redeploying the physical MAAS server as another node in my OS cluster. in theory I don't see why not but... )
<c0s> I might be mistaken but this might be a limitation of ZK membership
<lazyPower> stormmore - well my maas setup is a pure vmaas setup, meaning its only ever driving virtual machines so it made sense for me to slap it into kvm, and have it drive kvm vm's
<lazyPower> stormmore - sounds like what you're asking is akin to the same thing... but it depends on how you're looking at russian doll stacking that setup, so
<lazyPower> hard to say off hand, but i'm going to tentatively say "yes"
<kwmonroe> ack c0s.  we'll pick kjackal's brain later to see if he's come across dynamic master stuff.
<c0s> agree kwmonroe. But this is how I read http://spark.apache.org/docs/latest/spark-standalone.html
<c0s> In my practice, I've never seen a need for Spark HA, so can not comment authoritatively
<edsiper> lazyPower, no relations, I had to destroy everything
<stormmore> lazyPower: kinda, but instead of managed other VMs I want the vmaas to manage the physical system that the hypervisor (in this case openstack) is running on
<lazyPower> ah no, i dont think thats a good idea
<lazyPower> its not the same system is it?
<lazyPower> because really your maas server can run anywhere
<lazyPower> so long as it has access to the network to provide pxe boot instructions
<stormmore> it currently is not, has its own massively over powered box to itself
<lazyPower> oh well yeah for sure then :)
<lazyPower> stormmore - i've even been toying around with plugging maas into LXD and having that drive my vm's
<stormmore> I get the risks of running the management system on top of the systems it is managing. my vmware days taught me that but in reality, it works fine, just a tad riskier than keeping it separate
<lazyPower> so i'm not eating that initial ram/cpu allocation for the maas server itself, which gives me another unit. You can easily carve that machine up in many different ways to get s'more use out of that overpowered beefy rig running maas.
<stormmore> lazyPower: yeah that is another option I am pondering, I really just want to recoup the wasted hardware but it is further down the line project right now
<lazyPower> for sure
<lazyPower> when you get to that step, feel free to ping me
<lazyPower> if i'm around i'll lend a hand to help you get that moving
<stormmore> cool :)
<stormmore> ok back to a more pressing question, what version of juju should I install remembering this is lab environment I am deploying to?
<lazyPower> unless you want to file bugs :) 1.25 is current stable and i would target that until 2.0 lands as -stable next month
<magicaltrout> trunk!
<magicaltrout> trunk all the way!
<magicaltrout> then find you can't upgrade and are stuck on a specific build for all eternity
<stormmore> I am thinking since it will probably be after the 2.0 release that I get this PoC fully up, it might be worth the hassles of "playing" with an alpha release
<lazyPower> its currently in beta
<lazyPower> fairly stable, some rough edges
<magicaltrout> Trunk! Ignore beta!
<magicaltrout> you lot are all wimps
<lazyPower> indeed :)
<stormmore> hey I am with you magicaltrout, would be that way except I like my PoCs to break in more predictable ways :P
<edsiper> from my layer code (/reactive/...) how can I trigger a shell command ? should I use the common Python subprocess or some specific charmhelpers package.method ?
<lazyPower> edsiper - subprocess is how i would do it
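A minimal example of shelling out from a reactive handler with the stdlib (illustrative commands only):

```python
import subprocess

# check_call raises CalledProcessError on a non-zero exit status, which
# surfaces as a failed hook -- usually the behaviour you want in a charm.
subprocess.check_call(["true"])

# check_output captures stdout when you need the command's result:
out = subprocess.check_output(["echo", "hello"]).decode().strip()
print(out)  # hello
```

Passing the command as a list (rather than shell=True with a string) avoids shell-injection issues when arguments come from config or relation data.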
<edsiper> lazyPower, just to clarify, my layer in some routine needs to restart the service (service ... restart), I am not writing hooks as I am doing everything inside reactive, so subprocess is still the way to go ?
<lazyPower> nope, that would use a charmhelpers method
<lazyPower> 1 sec while i grab that for you
<edsiper> thanks
<lazyPower> https://pythonhosted.org/charmhelpers/api/charmhelpers.core.host.html#charmhelpers.core.host.service_restart
<edsiper> great, thanks
<lazyPower> edsiper - theres also a decorator in here to watch files and restart on your behalf - https://pythonhosted.org/charmhelpers/api/charmhelpers.core.host.html#charmhelpers.core.host.restart_on_change
<edsiper> lazyPower, even better :)
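The decorator works by hashing the watched files before and after the handler runs and restarting only when a hash changed. A self-contained sketch of that idea (not charmhelpers' actual implementation; restart_service here is a stand-in for a real service restart):

```python
import hashlib
import os

restarted = []                      # stand-in for actually restarting services

def restart_service(name):
    restarted.append(name)

def file_hash(path):
    """sha256 of a file's contents, or None if it doesn't exist."""
    if not os.path.exists(path):
        return None
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def restart_on_change(restart_map):
    """Decorator: after the wrapped handler runs, restart the services
    mapped to any watched file whose contents changed."""
    def wrap(fn):
        def wrapper(*args, **kwargs):
            before = {p: file_hash(p) for p in restart_map}
            result = fn(*args, **kwargs)
            for path, services in restart_map.items():
                if file_hash(path) != before[path]:
                    for svc in services:
                        restart_service(svc)
            return result
        return wrapper
    return wrap
```

The charm author just decorates the config-writing handler with a {config_path: [service_names]} map and never calls restart explicitly.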
<stormmore> does juju 2.0 play nice with maas 1.9?
<lazyPower> It does
<lazyPower> its juju2 / 2.0 that's currently being worked on
<lazyPower> s/2.0/maas 2.0/
<stormmore> I did consider maas 2.0 too but heard they removed the wakeonlan option
<edsiper> any tips to get rid of this message in my debug-log? "machine-5[3853]: 2016-03-24 22:19:15 ERROR juju.worker.diskmanager lsblk.go:116 error checking if "loop0" is in use: open /dev/loop0: operation not permitted
<edsiper> "
<lazyPower> edsiper - i cant cure the message, but you can filter it out - https://jujucharms.com/docs/1.21/troubleshooting#troubleshooting-with-debug-log
<lazyPower> wow 1.21 docs?
<lazyPower> really google?
<lazyPower> https://jujucharms.com/docs/1.25/troubleshooting-logs  - there's the current reference document for -stable
<lazyPower> sorry about the 1.21 link
<edsiper> thanks
<edsiper> lazyPower, what's the write decorator/python method to use when a relation is added to ?
<edsiper> *whats is the right
<lazyPower> depends on the interface layer
<lazyPower> most interface layers raise their own state, so you subscribe to it
<edsiper> lazyPower, I added a require database mongodb , for my charm this is optional, so if it's added I want to be notified
<lazyPower> cmars - the linked repo from interfaces has no issues (As its a fork) - should i just file on your upstream repo for the mongodb interface layer?
<lazyPower> edsiper - so what did you name the relation in your metadata.yaml?
<lazyPower> https://github.com/cloud-green/juju-relation-mongodb/blob/master/requires.py <- is the interface layer. Normally interface layers ship with a readme that give you example usage to consume it
<lazyPower> this seems to be WIP so its not quite gotten there yet
<cmars> lazyPower, use https://github.com/cloud-green/juju-relation-mongodb, that sounds good
<edsiper> lazyPower, I just added the following to my metadata.yaml:
<edsiper> requires:
<edsiper>   database:
<edsiper>       interface: mongodb
<lazyPower> you'll need to ensure youve a) got interface:mongodb in your layer.yaml, and b) you subscribe to @when('{name you put in metadata.yaml of your relation}.database.available')
<lazyPower> so @when('database.database.available')
<lazyPower> cmars - I dont think forks get issues enabled by default, can you enable issues on that repository <3
<edsiper> lazyPower, what's the difference between set the interface in my layer.yaml instead of my metadata.yaml ?
<cmars> lazyPower, oh, i think i've renamed some of those states.. i'm using connected and available in a newer version. i'll turn on issues and then propose a PR for you to review
<lazyPower> sounds good
<lazyPower> edsiper - layer.yaml is a build time construct, metadata.yaml is a runtime construct.
<cmars> issues activate
<lazyPower> edsiper - layer controls what `charm build` pulls in and you can do some extra things in there like define build-time variables so you can define behavior of the output artifact via those settings.
<lazyPower> edsiper - metadata is the only thing ever required to build a charm. it declares to juju what it is, what communications it can participate in, and additional meta about the service itself
<edsiper> lazyPower, got it, thanks
<cmars> lazyPower, https://github.com/cmars/juju-relation-mongodb/pull/3
<cmars> lazyPower, i used 'changed' for the state name though. this was from a ways back
<lazyPower> cmars - i dont know that scopes.GLOBAL is the correct scope to be using for this
<lazyPower> do you want every unit from x service to get the same db info? or do you even bother with that as mongodb just hands out ip/port?
<cmars> lazyPower, that could be better
<lazyPower> ok, and final question
<lazyPower> you removed the initial state that was in this relation, is anyone currently using interface:mongodb?
<lazyPower> if so, you just broke it for all those charms
<lazyPower> i ran into this myself :(
<lazyPower> so its kind of a fresh papercut to watch for
<cmars> lazyPower, you mean, the .database.available thing?
<lazyPower> yep
<lazyPower> this is where interface layers get tricky, as its expected for them to be and remain stable from inception. Breaking changes mean new interface to supercede the old one.
<lazyPower> you're probably fine to make this change, but i dont know if anyone else is using it
<lazyPower> as edsiper is the first i've encountered :)
<cmars> lazyPower, i can keep the same name -- it's arbitrary for my stuff, but as I don't have any visibility into who might be using it ... and some folks have blogged about it ... i can keep the name the same for continuity
<lazyPower> please do
<lazyPower> lets maintain backwards compat and if you need to change it, phase it out w/ a readme update to accompany the new syntax
<lazyPower> s/syntax/state
<cmars> lazyPower, ok, updated
<lazyPower> cmars: :shipit:
<cmars> lazyPower, ty
<cmars> lazyPower, i'll test this before landing in the cloud-green fork
<lazyPower> Thanks for the work keeping the interface up to date cmars  :)
<cmars> np
<lazyPower> edsiper - if you run into any weird issues/questions, cmars here is your man.
<lazyPower> he's got the mongodb interface skillz to pay my billz
<edsiper> lazyPower, what defines the object (methods, properties) that I receive in my available function ?
<lazyPower> edsiper - do you mean that object that gets passed in during a relationship context?
<lazyPower> as in
<edsiper> yep
<lazyPower> def mongodb_has_changed(mongodb): <- the mongodb param?
<edsiper> yep
<lazyPower> ah thats an instance of the interface-layer class
<lazyPower> so whatever it defines is what you have access to
<edsiper> I need to know things like hostname, tcp port..
<stormmore> OK I ran into my first juju2 problem right out the door
<stormmore> root@10.0.0.1:~# juju init
<stormmore> ERROR unrecognized command: juju init
<lazyPower> cmars ^ if you have a second, it would be great to loop ed in on the nuances of the mongodb interface
<lazyPower> stormmore - the entire process has changed :)
<cmars> edsiper, you'll have access to the methods defined on the MongoDBClient object in your layer. which in this case is just the .connection_string() method
<stormmore> apparently and the docs haven't reflected that
* lazyPower changed the topic of #juju to: || Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 beta2 release notes: https://lists.ubuntu.com/archives/juju/2016-February/006618.html
<cmars> edsiper, that will give you a host:port to connect to mongo
<lazyPower> stormmore check /topic
<stormmore> I suspect that I have another issue
<stormmore> the only command to work so far is juju help
<lazyPower> juju help commands
<edsiper> cmars, thanks
<edsiper> lazyPower, whats the proper way to trigger a message into the logs ?, eg: I want to raise an error
<edsiper> (error message)
<lazyPower> edsiper - is this critical and worthy of feeding back to the user?
<lazyPower> you want one of two methods: charmhelpers.core.hookenv.status_set or charmhelpers.core.hookenv.log
<edsiper> thanks
<edsiper> just informational
<edsiper> lazyPower, is there a way I can use a local database for my charm ?, I need to store some information when relations are joined plus other things
<lazyPower> thats what unitdata.kv is used for
<lazyPower> charmhelpers.core.unitdata
<edsiper> awesome
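unitdata.kv is essentially a small key/value table in a SQLite file on the unit. A minimal stand-in using only the stdlib (this is a sketch in the spirit of the API, not charmhelpers' implementation):

```python
import json
import sqlite3

class KV:
    """Tiny persistent key/value store, in the spirit of unitdata.kv()."""

    def __init__(self, path=":memory:"):
        # charmhelpers stores this under the unit's charm dir; a file path
        # here makes the data survive across hook invocations.
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, data TEXT)")

    def set(self, key, value):
        # JSON-encode so structured values (dicts, lists) round-trip.
        self.conn.execute("REPLACE INTO kv (key, data) VALUES (?, ?)",
                          (key, json.dumps(value)))
        self.conn.commit()

    def get(self, key, default=None):
        row = self.conn.execute(
            "SELECT data FROM kv WHERE key=?", (key,)).fetchone()
        return json.loads(row[0]) if row else default
```

So a handler can stash relation details on join, e.g. kv.set('mongodb', {'host': host, 'port': port}), and read them back from any later hook.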
<edsiper> lazyPower, I need to expose the user certain operations (like shell commands) that he can trigger for the running charm, what's the way to do it ?
<lazyPower> no idea what you're asking me
<edsiper> e.g: juju my-charm get_setup  <- get_setup is a custom def in my reactive
<lazyPower> there's actions... which is intended for things like dumping databases, preparing users that aren't managed by config, generating SOS reports when requesting help
<edsiper> lazyPower, the scenario is the following, once the user add relations, I need to provide an easy mechanism so he can perform some configuration for the running charm
<lazyPower> that sounds like what you're after
<lazyPower> gotta run, good luck on your charming adventure ed
<edsiper> lazyPower, e.g: the user added a mongodb relation to my charm, my charm store certain info into the key value store, now I need to provide the user some way to choose between a couple of choices to override the service configuration based in the key value store parameters
<edsiper> lazyPower, no prob, thanks in advance :)
<edsiper> aisrael, ping
<edsiper> cmars, can u give me a hand, I am getting this in my log "server.go:268 database:2: database_relation_joined" (which is OK), but I dont see my @when('out_mongodb.database.available') function get invoked, relation name is "out_mongodb", what can be the error ?
#juju 2016-03-25
<cmars> edsiper, sorry, was changing locations there.. is "out_mongodb" the name of your endpoint in your layer's metadata.yaml, under requires:
<cmars> ?
<cmars> edsiper, for example, if I had this in my layer's metadata.yaml, https://paste.ubuntu.com/15491929/, the state name would be "database.database.available"
<stormmore> damn it something is wrong in my maas config :-/
<stub> kwmonroe: I recommend using the leadership settings to distribute the masterhost:port rather than the peer relation, although the peer relation will work too.
<stub> kwmonroe: With the peer relation, after leadership failover you could end up with two units advertising masterhost:port on the peer relation (eg. when the lead unit has been powered off but the unit not removed - it still exists on the peer relation)
<stub> kwmonroe: With leadership settings, there is a single source of truth
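(stub's point — only the current leader may write leadership settings, so a powered-off ex-leader can't keep advertising a stale masterhost:port the way it can on a peer relation — can be illustrated with a toy model. This is not juju's real leader_set/leader_get API, just the invariant it enforces:)

```python
class LeadershipSettings:
    """Toy model of juju leadership settings: one shared bucket that only
    the elected leader may write (not the real hookenv API)."""

    def __init__(self):
        self.leader = None
        self.settings = {}

    def elect(self, unit):
        # in juju the controller runs the election; here we just set it
        self.leader = unit

    def leader_set(self, unit, **kwargs):
        if unit != self.leader:
            raise PermissionError("%s is not the leader" % unit)
        self.settings.update(kwargs)

    def leader_get(self, key):
        return self.settings.get(key)

cluster = LeadershipSettings()
cluster.elect("pg/0")
cluster.leader_set("pg/0", master="10.0.0.1:5432")
cluster.elect("pg/1")   # failover: pg/0 powered off, pg/1 elected
cluster.leader_set("pg/1", master="10.0.0.2:5432")
# single source of truth: readers see exactly one master, never two
print(cluster.leader_get("master"))  # -> 10.0.0.2:5432
```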
<kwmonroe> thx stub!  that's good info.
<stormmore> almost got it bootstrapped, getting stuck downloading the tools :-/
<stormmore> am I missing something or is curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz <[https://streams.canonical.com/juju/tools/agent/2.0-beta2/juju-2.0-beta2-trusty-amd64.tgz]> not a valid curl command?
<edsiper> cmars, yes, it's out_mongodb
<stormmore> hmmm gotten hung up twice at that point :-/
<cmars> edsiper, back from dinner. what does the function that you're decorating with that @when look like? does it take just one argument?
<cmars> it probably does.. in which case, i'm running out of ideas.. other than maybe checking that the latest built charm is actually what you've got deployed and are relating to
<edsiper> cmars, yeah, one argument and basically doing a log()
<cmars> hmm
<edsiper> cmars, http://pastebin.com/H1eUhge7
<cmars> edsiper, can you paste me your layer.yaml as well?
<stormmore> ok this is messed up, how can apt-get update work but juju deploy --to 0 juju-gui cant
<edsiper> cmars, ooops, no layer.yaml
<cmars> edsiper, aha
<edsiper> cmars, I will dig into it tomorrow :D , thanks for pointing the main problem
<cmars> edsiper, add the interface, charm build, and you'll probably see that get called
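(The fix cmars is pointing at: without a layer.yaml that includes the mongodb interface layer, `charm build` never wires up the relation, so the @when handler can't fire. A sketch of what the missing file might contain — layer/interface names are assumed from the conversation — plus a naive check, kept stdlib-only:)

```python
# What edsiper's missing layer.yaml might look like (names assumed
# from the conversation; the real interface layer name may differ):
LAYER_YAML = """\
includes:
  - layer:basic
  - interface:mongodb
"""

def declared_interfaces(layer_yaml):
    """Naive extraction of 'interface:*' entries from a layer.yaml body.
    Good enough for this flat list; a real build uses a YAML parser."""
    return [line.strip().lstrip("- ") for line in layer_yaml.splitlines()
            if line.strip().startswith("- interface:")]

# If the interface layer isn't included, states like
# 'out_mongodb.database.available' are never set.
print(declared_interfaces(LAYER_YAML))  # -> ['interface:mongodb']
```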
<edsiper> g'night...
<cmars> edsiper, night
<edsiper> yeah
<stormmore> gotta love running into an ipv6 vs ipv4 confusion
<stormmore> some days I just wonder how I got to where I am when I miss the basics
<stormmore> oh I do like the default juju status for 2 :)
<stormmore> OK so I just did juju deploy juju-gui and juju expose juju-gui, juju status shows that it has the right public address and yet I keep getting connection timed out errors
<deanman> Hi, i would like to write a docker charm utilising the layer-docker charm but it is not clear to me what's the best way to do this. Do i simply clone layer-docker or can i use something from charm-tools to pass this as an argument?
<stub> deanman: You declare the docker layer in your layer.yaml. You probably want to go over https://jujucharms.com/docs/devel/developer-getting-started and https://jujucharms.com/docs/devel/developer-layer-example (the devel branch of the docs now documents layers and how to use them)
<deanman> stub: thank you!
<narindergupta1> hi I am getting this issue while deploying the latest ceph charm https://bugs.launchpad.net/opnfv/+bug/1561984
<mup> Bug #1561984: ceph-osd charm failed to deploy  <OPNFV:New> <https://launchpad.net/bugs/1561984>
<narindergupta1> can someone help me finding the solution. it seems latest changes in ceph-osd charm http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/ceph-osd/next/revision/76 is causing this issue?
<jcastro> marcoceppi: did you forget to push your fixes to fiche? Still shows up busted for me on jujucharms.com
<jcastro> well not busted, just with the extra stuff
<marcoceppi> jcastro: it's pushed.
<marcoceppi> store ingestion, probably
<marcoceppi> jcastro: http://bazaar.launchpad.net/~charmers/charms/trusty/fiche/trunk/revision/2
<jcastro> jrwren: which one of you guys handles ingestion?
<jrwren> jcastro: kind of all of us?
<jcastro> can you check fiche?
<jrwren> jcastro: looks like its been pushed to before, so ingestion is disabled for that charm.
<jcastro> jrwren: what does that mean?
<jcastro> why would ingestion be disabled?
<jrwren> jcastro: because someone uploaded it using charm push?
<jcastro> oh, so how do we fix it to make it ingestible?
<jcastro> jrwren: do we need to rerun a publish or something?
<jrwren> jcastro: You cannot. Once a charm has been published using the new charm push command, that is how it is now managed. This fits with ingestion going away.
<jcastro> ok so how do we fix it so that charm is ingested?
<jcastro> tldr, how do we make the jujucharms.com page reflect what's actually in the vcs branch?
<jrwren> jcastro: AFAIK we have no means to do that. You can fix the charm in the store by pushing the new version using the charm push command.
<jcastro> marcoceppi: is that what you tried?
<jrwren> jcastro: what is wrong with it?  https://jujucharms.com/fiche/ is fiche-3 and doesn't show the extra files that were removed in bzr rev-2
<jcastro> huh
<jcastro> that directory was there an hour ago
<jcastro> but marco pushed that on 21 march, maybe I had a stale cache?
<jcastro> jrwren: ok I guess I'm all set lol, thanks.
<jrwren> jcastro: Sorry for the confusion.
<jcastro> jrwren: actually, I just realized how I did that
<jcastro> google for "fiche jujucharms"
<jcastro> the result sends me _specifically_ to revision 2 instead of trunk
<jrwren> must be your tuned google. oh... 5th result for me.
<jrwren> I rarely look down to 5th ;]
<jcastro> is it rev 2 or 3?
<jcastro> there's a bug on this already, we need to present google with the right bits
<jcastro> I just should have been paying attention to the rev #
<jrwren> latest is 3.
<jcastro> yeah google shows me 2
<jrwren> I'd grumble about the link that says revisions: 1 on that page, but I'm told that UI is by design, so I'm not going to grumble about it and instead suggest you not be misled by it.
<LiftedKilt> are there any known issues with bootstrapping Xenial on juju2/maas 1.9?
<LiftedKilt> Maas can deploy xenial just fine by itself, but when I try to bootstrap on xenial, it fails - maas shows the install completes, but juju complains that "instance started but did not change to Deployed state"
<LiftedKilt> juju bootstrap xenial dr --config default-series=xenial --debug
<LiftedKilt> I can bootstrap on 14.04 just fine
<skay> does anyone have a bash prompt that shows the current juju env
<skay> it could be handy on one of the machines I work on
<lazyPower> cory_fu might, i know he's a bit of a wizard at that stuff with liquid prompt
<beisner> hi LiftedKilt, i would suspect a bootstrap timeout.  ie. it may be taking slightly longer than the 10m default.  see `juju help bootstrap` to learn how to increase the time that juju will wait for the bootstrap machine
<LiftedKilt>  beisner - the actual OS installation only takes a couple minutes, and I can see the output on MAAS that the Xenial install was successful. Are you thinking it's the tools installation that might be holding it up?
<beisner> LiftedKilt, it may be a separate issue, but i also had the same "instance started but did not change to Deployed state" with xenial as it was overrunning the 10m default timeout a bit.  worth noting, the machines i'm talking about are a bit slow, and take several minutes just for POST.
<LiftedKilt> beisner: I appreciate the tip - I increased the timeout to 20 min and am redeploying. Fingers crossed that it works! I successfully bootstrapped Xenial a couple days ago, I just haven't been able to do it since. Been driving me nuts.
<A-Kaser> Hi
<LiftedKilt> beisner: good looking out - it deployed just fine. Thanks!
<beisner> LiftedKilt, \o/  awesome, happy to help.
<marcoceppi> Office Hours in about 20 mins: https://plus.google.com/hangouts/_/hoaevent/AP36tYfv6UdYcQGhr8qG33I1H0mmMK-2c5vFDcgXtVZrCEa-Tk0WuQ?hl=en - to join, otherwise tune into https://ubuntuonair.com/
<jcastro> kwmonroe: cory_fu: you guys doing office hours?
<jcastro> someone from juju core would be great!
<jcastro> cherylj: how about you?
<cherylj> jcastro: what do I need to do?
 * cherylj has nothing prepared
<cherylj> oh snap, we should talk about the new "admin" and "default" model stuff coming out in beta3
<cherylj> jcastro: and when is it?
<jcastro> just come to the hangout
<rick_h_> jcastro: linky me please and I'll join
<jcastro> it's in 5 minutes
<jcastro> the idea is not to be prepared, just talk
<jcastro> it's like a podcast
<rick_h_> cherylj: yea just party for a bit
<marcoceppi> Office Hours in about 5 mins: https://plus.google.com/hangouts/_/hoaevent/AP36tYfv6UdYcQGhr8qG33I1H0mmMK-2c5vFDcgXtVZrCEa-Tk0WuQ?hl=en - to join, otherwise tune into https://ubuntuonair.com/
<arosales> hoa, I initially read that as home owners association :-)
<bdx> office-hours: http://paste.ubuntu.com/15500937/
<c0s> kwmonroe: I don't know if you guys have seen this https://d0.awsstatic.com/whitepapers/lambda-architecure-on-for-batch-aws.pdf - looks like a decent and clean explanation of the lambda-arch
<arosales> c0s: looks like an interesting read, thanks
<c0s> np
<bdx> can we attach cinder vols to nova-lxd instances?
<bdx> sorry
<bdx> rick_h: good to hear! thanks!
<bdx> marcoceppi: awesome!
<cargonza> bdx, cinder + nova-lxd is not supported yet
<bdx> cargonza: thanks, what is the status of that?
<cargonza> still working the priorities and plan. we're finishing up the target goals for 16.04. We'll consider it for future planning.
<bdx> cargonza: awesome! thanks!
<bdx> marcoceppi: great, will do!
<jcastro> LiftedKilt: heya, have you seen the layered charm bits?
<LiftedKilt> jcastro: I haven't
<jcastro> https://jujucharms.com/docs/devel/developer-layers
<jcastro> so basically
<LiftedKilt> I've heard it mentioned a few times, but haven't seen it yet
<jcastro> in the past, we made you do things the hard way
<jcastro> which is horrible
<jcastro> now you compose a charm out of layers
<LiftedKilt> jcastro: awesome - looking at link now. Thanks
<jcastro> http://interfaces.juju.solutions/
<arosales> also a good link is https://d0.awsstatic.com/whitepapers/lambda-architecure-on-for-batch-aws.pdf
<arosales> buffer!
<jcastro> so like, you could reuse a java layer the big data team uses, for example
 * arosales meant https://jujucharms.com/docs/devel/developer-getting-started
<jcastro> before everyone had to do things individually
<arosales> we should like /developer-getting-started to developer-layers
<LiftedKilt> jcastro: this will be awesome for creating the liferay charm, for example
<arosales> s/like/link
<jcastro> yes
<LiftedKilt> I can just hand this to our devs
<jcastro> LiftedKilt: in fact, just pretend the old way doesn't exist
<LiftedKilt> jcastro: gladly haha
<jcastro> so basically all the old charms weren't as testable or reusable, so like say we find a bug in one of the java layers
<jcastro> you'd just pick up the bug fix
<jcastro> instead of "hey everyone for every charm using java, please fix foo."
<LiftedKilt> oh that's clean
<aisrael> charm push looks awesome
 * lazyPower raises the roof at marco's ACLs
<lazyPower> i totally did this
<lazyPower> haha
<c0s> arosales: I think if you replace Kinesis in that paper with Kafka - you'll get pretty much the same result ;)
<c0s> but not AWS-locked
<arosales> lazyPower: do you know, on "push", if one doesn't specify the series, does it error if the charm doesn't include series metadata?
<lazyPower> i do not know, i haven't tried
<arosales> c0s: ok
<lazyPower> arosales once i'm done restoring my phone and get my MFA restored, i'll let you know :P
<arosales> lazyPower: no rush, just wondering
<bdx> aweee
<lazyPower> oh no
<lazyPower> stream cut out
<arosales> :-)
<arosales> hit wrong button
<marcoceppi> haha, I pressed "stop stream" instead of "stop sharing"
<marcoceppi> sorry everyone
<rick_h_> we were about to wrap up, it'll be ok
<rick_h_> we'll just give marcoceppi a hard time for a while :P
<marcoceppi> <3
<arosales> lazyPower: rick_h_ was saying that if you try to push a charm and it has no series in the metadata you should get an error back
<lazyPower> ah
<lazyPower> I've only pushed to implicit series locations in the store
<lazyPower> so, good question and good follow up :D
<falanx> if we were to deploy openstack with juju and use lxd for nova, would we have no persistent storage?  Is that what cargonza meant when he said cinder + nova-lxd is not supported?
<bdx> falanx: My understanding is that it is being considered as a future feature ...
<bdx> cargonza:^
<bdx> falanx: see cargonza's message above
<cargonza> falanx, the openstack lxd charm deploys ceph for storage. cinder is another type of storage that we still need to support in nova-lxd
<LiftedKilt> cargonza: so by cinder you mean the reference architecture cinder using lvm, right?
<LiftedKilt> cargonza: the openstack lxd charm integrates with ceph allowing containers to mount ceph volumes?
<cargonza> LiftedKilt, yes on both
<bdx> LiftedKilt: part of the purpose of cinder is to abstract you from the backend storage implementation
<LiftedKilt> cargonza: awesome
<LiftedKilt> bdx: my understanding is that cinder is both the reference lvm implementation as well as the name for the general block storage api integration for openstack.
<bdx> LiftedKilt: It doesn't matter to the openstack instance what the underlying storage may be ...as it is just consuming a cinder storage resource ... it doesn't matter what backend you use
<bdx> or it shouldn't at least
<bdx> thats the whole point right?
<LiftedKilt> indeed
<A-Kaser> ping admcleod- coreycb kwmonroe kjackal
<kwmonroe> pong A-Kaser
<kwmonroe> A-Kaser: i'm about 200% sure you meant cory_fu instead of coreycb ;)
<A-Kaser> right :)
<A-Kaser> bad tab
<kwmonroe> np, what's up?
<A-Kaser> I'm trying to set my credentials to test on aws
<A-Kaser> I would like to use eu-central-1 or eu-west-1
<A-Kaser> with the devel documentation example it seems my default-region is ignored
<A-Kaser> and the link for more information on credentials, https://jujucharms.com/docs/devel/credentials/, is a 404
<A-Kaser> could you help with this little point :)
<kwmonroe> A-Kaser: i noticed that too with an azure deployment, but assumed i did something wonky.. i'll check again and open a bug (unless rick_h_ knows off hand whether default_region is always honored).  anyway, you can force the bootstrap to use a region like this:
<kwmonroe> (/me looks for a minute)
<A-Kaser> and just in case, I'm the dude which sent you an email right now :)
<A-Kaser> no pb :)
<kwmonroe> A-Kaser: i think this will work for ya: juju set-default-region aws eu-central-1 (or west)
<kwmonroe> A-Kaser: then proceed with your bootstrap.  alternatively, you can set the region during bootstrap with " juju bootstrap <controller name> <cloud>[/region]"
<kwmonroe> so like, juju bootstrap kevin-is-amazing aws/eu-central-1
<kwmonroe> or whatever...
<magicaltrout> that would be a lie though
<kwmonroe> dag nab it magicaltrout - i thought you'd be in bed by now.
<arosales> I did "juju set-defatul-region" and that worked for me
<magicaltrout> hehe
<kwmonroe> arosales: s/defatul/default, but cool.  thx for the confirmation.
<A-Kaser> juju like to talk : 20 x WARNING could not delete security group "juju-4520f63e-51f6-43bd-8b89-336a0190eeef-0": resource sg-15670d6d has a dependent object (DependencyViolation)
<arosales> kwmonroe: yes what you said
<kwmonroe> :)
<A-Kaser> kwmonroe: default-region is ok now
<A-Kaser> thx
<kwmonroe> cool, np
<kwmonroe> A-Kaser: can you tell me what page linked you to your /credentials 404?  we need to fix that up.
<A-Kaser> https://jujucharms.com/docs/devel/getting-started
<A-Kaser> link just before chapter 4. Bootstrap
<A-Kaser> -> please see this guide to credentials
<kwmonroe> ahhh, yup A-Kaser, thanks
<A-Kaser> I suppose it's possible to have more than one controller, so how can I specify which controller is used with my deploy command ?
<A-Kaser> (i'm trying to read "juju help deploy")
<kwmonroe> A-Kaser: firstly, we'll get the docs fixed up with this https://github.com/CanonicalLtd/jujucharms.com/issues/235, thanks for the report
<kwmonroe> A-Kaser: secondly, you choose your controller with the "juju switch" command
<kwmonroe> so if you have more than one, for example, an aws controller and a gce controller and an azure controller, you would "juju switch gce" to make that the controller that fulfills deployment requests.
<A-Kaser> oh okay, so it's not recommended to have 2 terminals using juju at the same time with different controllers I suppose
<kwmonroe> yeah A-Kaser, i don't recommend that.  you could probably do it if you were vigilant in "juju switch <foo>" before each juju interaction, but i'm not that multi-taskable.
<A-Kaser> FYI on AWS deploy wordpress don't open 80 in security group, so it's not possible to open it in a browser
<A-Kaser> kwmonroe: ok no problem I will have only one instance :)
<kwmonroe> A-Kaser: i do, however, have long-running deployments and controllers, so for example, once a month, i switch to my openstack controller, juju ssh over to do something, but then switch back.  it's just not something i would do very often ;)
<A-Kaser> yes, I thought it was possible to add something like "-controller=master1" in the deploy command; but switch will be fine
<kwmonroe> A-Kaser: can you "juju expose wordpress"?
<kwmonroe> i'm not certain about the correlation between juju expose and aws security groups, but if you expose wordpress, i'm curious if that lets you get to it in a browser.
<A-Kaser> I have juju 1.25 and 2.0 ... my fault, i'm removing 1.25
<kwmonroe> A-Kaser: i think they can co-exist pretty well.. just need to run 'update-alternatives --config=juju" to set whichever one you want.
<kwmonroe> or removing 1.25 and going all-in with 2.0 is cool too :)
<kwmonroe> or you could pull a magicaltrout and run from master every 2 hours.
<c0s> kwmonroe: I was looking at some of the charms description like flume-kafka and whatever.... And something caught my attention
<c0s> namely: the section on smoke testing of the deployed bundle
<c0s> like every single bundle has it. Is there a concept of auto-smoke-testing a unit at the end of the deployment?
<kwmonroe> c0s: kinda -- each charm in a bundle gets all the ./tests/* run when it is bundletested, so in that regard, each unit is smoke tested
<kwmonroe> one sec, i'll find an ex
<c0s> I know what you're talking about
<c0s> I have a slightly different angle: instead of asking a user to smoke test a hadoop cluster, something simple and automatic could be done by Juju itself.
<c0s> like explained here
<c0s> hdfs dfs -chmod -R 777 /tmp/hdfs-test
<c0s> hdfs dfs -ls /tmp # verify the newly created hdfs-test subdirectory exists
<c0s> hdfs dfs -rm -R /tmp/hdfs-test
<c0s> hdfs dfs -ls /tmp # verify the hdfs-test subdirectory has been removed
<c0s> and then for YARN, and so on
<c0s> I think user shouldn't even see this... like ever
<c0s> does it make sense, kwmonroe?
<kwmonroe> yeah, i hear ya c0s
<kwmonroe> i guess that "hdfs dfs -xxx" bit in the readme is only there to tell the user how to manually smoke the deployment in case they didn't use a promulgated version
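(c0s's manual sequence — with the `mkdir` that his "verify the newly created hdfs-test subdirectory exists" step implies — could be driven automatically, as he suggests. A sketch with an injectable runner so the sequencing is testable without a Hadoop cluster; the command strings come from the conversation, the helper itself is hypothetical:)

```python
import subprocess

# c0s's HDFS smoke sequence expressed as data, so a charm or test
# harness could run it automatically instead of asking the user to.
HDFS_SMOKE = [
    "hdfs dfs -mkdir -p /tmp/hdfs-test",
    "hdfs dfs -chmod -R 777 /tmp/hdfs-test",
    "hdfs dfs -ls /tmp",
    "hdfs dfs -rm -R /tmp/hdfs-test",
]

def run_smoke(commands, runner=None):
    """Run each command in order; True only if every one exits 0.
    'runner' is injectable so the logic can be tested without HDFS."""
    if runner is None:
        runner = lambda cmd: subprocess.call(cmd, shell=True)
    # all() short-circuits, so the sequence stops at the first failure
    return all(runner(cmd) == 0 for cmd in commands)
```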
#juju 2016-03-26
<kwmonroe> c0s: i need to EOW, but i'm interested in this auto-juju smoke train you're thinking about.  let's explore that monday.
<kwmonroe> A-Kaser: if you have any more questions about deploying/charming with juju2, feel free to drop them here or shoot me an email.
<c0s> sure, have a good weekend, I will put some comments on the doc for us to pick up next week
<kwmonroe> take care folks!
<c0s> cheers kwmonroe!
<A-Kaser> kwmonroe: yes I'm trying some commands :)
<A-Kaser> I would like to force the ec2 instance type, so I'm using --constraints instance-type=m3.large
<A-Kaser> but I made an error and typed mx.larte
<A-Kaser> /larte/large
<A-Kaser> juju status gives me an error message but now I'm trying to destroy the service without success
<stormmore> is anyone able to help me with a multi-vlan routing type question? setting up a maas, juju, openstack cluster, got it to deploy juju-gui and it looks like it is getting the right IP but the routing on the deployed machine seems off since it is on a different vlan than the management one.
<stormmore> I know I am missing something super simple in my configuration, just not sure where and what is the best way to setup the correct routing
 * magicaltrout wonders if the charm store build tool takes saturday and sunday off like everyone else.....
<rick_h_> magicaltrout: no, shouldn't
<rick_h_> magicaltrout: but check out the latest office hiurs and new charm command to work around any issues there
#juju 2016-03-27
<stormmore> OK so I figured out what my routing problem was, need to setup another routing table, now I am not sure if this is something that juju or maas can manage? multi-vlan/nic with multi-gateways. If so, where do I configure the routing tables?
#juju 2017-03-20
<kjackal> Good morning Juju world!
<cnf> wow, juju is doing weird stuff
<cnf> or maybe i am
<cnf> hmm
<cnf> why won't it spin up machines?
<cnf> it seems to be ignoring the space constraints and configuration o,O
<kjackal> Hello lazyPwr, are you around?
<cnf> k, meeting, i'll debug this later
<cnf> ok
<cnf> ugh, i can't get juju to deploy anything sane o,O
<cnf> how do you debug what juju is doing?
<SimonKLB> cnf: you can watch the machine and/or unit logs using `juju debug-log`
<cnf> debug-log isn't showing anything
<SimonKLB> it's empty?
<cnf> i do a juju debug-log, and then i do a deploy, and it produces no output
<SimonKLB> what does juju status tell you?
<SimonKLB> is a machine even provisioned?
<cnf> it says a lot of things are up
<cnf> it failed to boot 1 machine, and it is using the wrong ip's for things
<SimonKLB> that's odd, the debug-log should definitely say something if that's the case
<cnf> but it has things booted and assigned
<SimonKLB> it might be some disconnect between your client and the machine then
<cnf> i don't get why it didn't boot the one machine
<cnf> and i don't understand why it is using the wrong network on the other ones
<SimonKLB> can you ssh into the machine using juju ssh #?
<cnf> which one?
<cnf> i can access all the ones that are up, i just used ssh without juju though
<SimonKLB> are you logged in as the default "admin" user in juju?
<SimonKLB> or are you logged in as a newly created user with custom ssh keys?
<cnf> uhm, idno?
<cnf> i can do juju ssh <machineid>
<SimonKLB> check juju whoami
<SimonKLB> oh you can?
<cnf> yes
<SimonKLB> then check the logs in /var/log/juju
<cnf> hmz, ffs, it seems it totally ignored my constraints
<cnf> and why isn't it booting the 4th machine?
<cnf> SimonKLB: how will that tell me why it is putting things on the wrong machines?
<SimonKLB> i would make sure that one machine is deployed and installed correctly first
<cnf> well, it can't install correctly
<SimonKLB> probably just create a fresh model and deploy the super-small ubuntu charm
<cnf> it doesn't have the right networks
<SimonKLB> which provider are you deploying juju on?
<cnf> maas
<SimonKLB> ah, so youre in charge of setting up the "cloud" as well then
<SimonKLB> the issue might be with maas and not juju
<SimonKLB> if you want to get used to juju id suggest setting it up on LXD or some cloud like aws first
<cnf> lxd is local only
<cnf> and i don't have AWS credit
<cnf> and debugging why it's putting stuff wrong should be the same
<SimonKLB> yea, well it might be good to do it locally with LXD just to get to know it all before you try to create a production ready environment
<cnf> LXD is LINUX local only
<cnf> and this is a PoC, not a production env
<SimonKLB> just saying, setting up maas and juju when both are totally new to you might be overwhelming, it was for me
<cnf> well, totally new, been at this for over 2 weeks
<cnf> neither are very confidence inspiring so far
<SimonKLB> im no maas expert, so you might want to check with #maas for more network debugging
<cnf> either way, LXD doesn't really let me test constraints etc
<cnf> the network works fine
<cnf> juju is putting things on the WRONG machines
<SimonKLB> oh i thought you said it did not, my bad
<cnf> and assigning the WRONG network to services
<cnf> if it is in subnet A on one machine and subnet B on another, then they can't talk to each other
<cnf> which will obviously make things break
<SimonKLB> yea, that's what the bindings are for, right?
<cnf> and juju is ignoring them
<SimonKLB> i'd create a bug for that then
<SimonKLB> however, you should still be able to watch the logs even though the services are getting the wrong subnet
<SimonKLB> it might give you some clues
<SimonKLB> check the machine log inside the machine if youre unable to get it via juju debug-log
<SimonKLB> that is /var/log/juju/machine-#.log
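(Chasing /var/log/juju/machine-#.log repeatedly, as SimonKLB suggests, is easier with a small tail helper. A generic stdlib sketch — the path is whatever machine log you are after:)

```python
from collections import deque

def tail(path, n=50):
    """Return the last n lines of a log file, e.g.
    /var/log/juju/machine-0.log, without keeping the whole file in memory."""
    with open(path, errors="replace") as f:
        # deque with maxlen keeps only the final n lines as we stream
        return [line.rstrip("\n") for line in deque(f, maxlen=n)]
```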
<cnf> i find it annoying i can't get why it's doing this
<cnf> ugh, ffs >,<
<SimonKLB> are the spaces added to juju? juju spaces
<SimonKLB> and do the bindings correctly correlate to the spaces in juju?
<cnf> yes
<SimonKLB> which version of juju and which version of maas are you running?
<cnf> 2.1.1-sierra-amd64 and  2.1.3+bzr5573-0ubuntu1 (
<cnf> why can't i ask juju why it isn't booting a machine o,O
<cnf> hmz
<SimonKLB> cnf: there might be information in the controller node logs
<cnf> there should be a lot more debug info on this
<cnf> hmm, now it's booting all 4
<cnf> o,O
<cnf> i didn't change anything
<SimonKLB> great :)
<cnf> well, no, not when i don't know why
<SimonKLB> what you could do is checking the cloud-init log as well, that has som initial setup stuff before juju is installed
<cnf> hmz
<cnf> now all my machines came up
<cnf> but everything is red anyway
<cnf> http://termbin.com/n81q
<SimonKLB> and what does the machine log say?
<ybaumy> is there a way to display juju status without the relations section
<cnf> ybaumy: if you find out, please let me know :P
<cnf> hmm, juju is doing silly things with networking again! >,<
<cnf> trying to use vlan tags that are not present
<ybaumy> juju status | sed -e '/^Relation/,$d'
<ybaumy> just to let you know
<ybaumy> much nicer
<ybaumy> i will create alias for that
<SimonKLB> ybaumy: juju status --color | sed -e '/^Relation/,$d'
<SimonKLB> even better ;)
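(ybaumy's sed one-liner in Python form, for anyone scripting around `juju status` output rather than piping in a shell — same rule: drop everything from the first line starting with "Relation" onward. The sample text is made up for illustration:)

```python
def strip_relations(status_text):
    """Python equivalent of `juju status | sed -e '/^Relation/,$d'`:
    keep only the lines before the first one starting with 'Relation'."""
    kept = []
    for line in status_text.splitlines():
        if line.startswith("Relation"):
            break
        kept.append(line)
    return "\n".join(kept)

sample = "Model  Controller\nUnit   Workload\nRelation provider  requirer\nfoo bar"
print(strip_relations(sample))  # -> Model  Controller\nUnit   Workload
```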
<ybaumy> ahh i can see now
<ybaumy> :D
<cnf> :P
<cnf> i'm still not sure how to deal with machine loss in juju
<cnf> atm my action for a failure is "delete model, add model, deploy again"
<ybaumy> cnf: what do you mean with machine loss
<ybaumy> one machine?
<cnf> yes
<ybaumy> a ceph osd ?
<cnf> a machine craches
<cnf> it's lost
<cnf> you have plenty in your maas, so how do you replace it?
<SimonKLB> cnf: just saw your link, your initial poc is openstack? :D
<cnf> yes
<SimonKLB> courageous!
<cnf> openstack is the _entire_ and only reason for looking at juju
<SimonKLB> i would probably start with something smaller if i were you though
<ybaumy> cnf: cant you just setup a new ceph-osd and remove the other one out of the system
<ybaumy> SimonKLB: for me too openstack is the reason im here
<cnf> SimonKLB: start with that?
<cnf> what*
<cnf> ybaumy: if it's just a single unit, i guess
<cnf> ybaumy: if it's something that has more functionality, i have to do all that manually?
<SimonKLB> ybaumy: yea juju is great at deploying complex things, but deploying openstack the first time you use juju and maas might be a bit much
<ybaumy> cnf: i guess so http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/
<cnf> SimonKLB: as i said, i have been at this for 2 weeks now
<SimonKLB> cnf: just try getting a smaller deployment to work well to make sure that maas and juju is running correctly
<cnf> something "simple" works
<cnf> as soon as I step off the yellow brick road, shit breaks spectacularly
<cnf> and no real debugging
<cnf> SimonKLB: what smaller deployment?
<cnf> wordpress?
<SimonKLB> you could try adding complex network to a simpler bundle
<cnf> that works, and has no relevance to me
<ybaumy> SimonKLB: i tried it manually the first time. and i really had problems. and it took me a very long time to get to the point where it was working. but the thing is ..  we want our customers to setup their own cloud via a website so juju is a very good thing
<SimonKLB> try setting up wordpress + mysql with multiple subnets?
<cnf> SimonKLB: and then what? i know nothing of wordpress
<SimonKLB> cnf: what i would do is deploy it with the bindings you want later on with openstack, ssh into the machines and make sure that the network is getting configured correctly
<cnf> SimonKLB: but wordpress doesn't need 4 networks
<cnf> i think?
<cnf> idno, i never use wordpress
<SimonKLB> haha sure no, but just to have something a little bit more managable to start off with
<cnf> i don't find something i don't know and don't need "simpler" personally
<cnf> "waiting for machine"
<cnf> waiting for _what_ machine? o,O
<SimonKLB> i'd think it would be easier to debug just two machines with one charm each rather than the complete openstack bundle
<SimonKLB> especially if youre having network problems
<cnf> i'm not having network problems
<cnf> the NETWORK works fine
<SimonKLB> cnf: "waiting for machine" means the machine isnt provisioned yet
<SimonKLB> in your case it looks like the lxd containers?
<cnf> idno, i find juju very lacking in debugging tools, and ways to ask it what is going on
<SimonKLB> ssh to machine 0, check the juju logs, make sure lxd works correctly
<SimonKLB> ive never had any issue working out what is going on during provisioning from reading the logs
<SimonKLB> but if you cant reach them using juju debug-log you have to enter the machine
<cnf> neutron-gateway/0*        error     idle        0        195.130.158.10           hook failed: "config-changed"
<cnf> great
<SimonKLB> so that is super easy to debug if you check the logs
<cnf> FFS!
<cnf> it is once again trying to use the WRONG vlan id!
<cnf> hmz
<SimonKLB> ive never used vlans with juju, sorry!
<cnf> and it's frustrating it takes 25 minutes between tries
<cnf> because HP hardware is very slow in booting :P
<SimonKLB> yea we have a maas setup here as well, but i never use it until im very sure everything works as it should on other providers first
<SimonKLB> i dont want to do test-iterations on physical machines
<SimonKLB> it takes forever
<cnf> it is what i have
<cnf> hmm, how do i make it try again if i changed config?
<magicaltrout> resolved
<SimonKLB> cnf: juju resolved [unit]
<cnf> k
<cnf> hmz
<cnf> this stuff is starting to get on my nerves
<ybaumy> haha i tried to add alias as jj for that status command. and i always got error. hmm no i found out that jj is another command
<marcoceppi> tvansteenburgh: can I "add-user" with libjuju?
<tvansteenburgh> marcoceppi: when this lands https://github.com/juju/python-libjuju/pull/89
<marcoceppi> tvansteenburgh: niceee
<tvansteenburgh> marcoceppi: community contrib even
<marcoceppi> tvansteenburgh: on line 149, acl was removed, but I don't see it implemented anywhere else. How would you add-user then set grants?
<marcoceppi> tvansteenburgh: nvm, model.grant
<marcoceppi> that makes more sense
<marcoceppi> time to make a pull request
<Zic> Cynerva: hey, do you remember about my bug and certificate error from kubectl exec/logs? I just solved it!
<Zic> I'm going to post how on the GitHub's issue
<Zic> (TL;DR: something override KUBELET_ARGS in /etc/default/kubelet at my nodes, and so the --client-ca-file was not here after the upgrade)
<Zic> I think it's a mix of juju upgrade-charm vs. apt upgrade
<lazyPwr> Zic: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/238
<Zic> lazyPower: oh, welcome back :p I didn't know you were here :)
<Zic> lazyPower: yeah, it's exactly this bug
<lazyPower> Zic: yeah, illness can only defer me for so long ;)
<lazyPower> sorry you encountered this, but we are aware of the issue though. And its got a lot of weird manifestations
<lazyPower> namely those x509 errors littering the logs
<lazyPower> and some failure scenarios when doing operations
<Zic> was non-blocking as we did our urgent-debug with `docker exec` directly on the node
<Zic> (in waiting for a resolution)
<Zic> it's not comfortable but it works
<lazyPower> Zic: I'm glad it was low enough impact you were able to work around it,  I'm still sorry you hit this though :(  I ack'd this change and didn't do a full upgrade test on it
<lazyPower> you can point a finger at me for this one
<Zic> lazyPower: do you want me to mark my bug as a duplicate of yours then?
<lazyPower> Zic: Where did you file your bug?
<Zic> https://github.com/kubernetes/kubernetes/issues/43209
<lazyPower> Zic: nah, your bug predates mine, and this is upstream so a bit higher visibility
<lazyPower> i xreffed them, thanks for filing the bug with detailed info
<Zic> :)
<Zic> I just posted ~20min ago how I worked around this
<Zic> it seems that Juju does not actively override /etc/default/kubelet
<lazyPower> yeah its a couple missing flags during upgrade
<lazyPower> i'm not sure why it didn't trigger that defaults file to get re-written...
<Zic> (was a bit skeptical since /etc/default/kubelet has read-only permissions)
<lazyPower> i think we have a stale state guard somewhere that prevented the flags from getting added on an upgrade run
<lazyPower> Zic: well we set that read-only permission to discourage users from putting manually edited changes in there
<lazyPower> as they won't be persisted... the next time that file gets an update it'll nuke whatever customization you put in there that isn't in unitdata for the charm
<Zic> yeah, and I feared that my KUBELET_ARGS would be overwritten in the next few minutes
<Zic> but no, it holds for now :p
<lazyPower> each k8s service has a set of flags under control by this little python module i put together to manage flags.
<lazyPower> it was easier to just probe an object than grep files... that was my reasoning anyway. So without the flags existing in there, its likely things are going to go MIA
<lazyPower> but yeah, whatever blocked it from getting updated is apparently blocking that whole scenario from coming true
<lazyPower> i suspect there's a stale state or data_changed guard to blame for why it didn't get the update.
<lazyPower> Zic: I should be done writing the etcd3 upgrade tests today and can move to trying to get a patch release for this tomorrow. Thanks again for following up
<Zic> other than that, the upgrade goes pretty well, and kube-dns is much more stable in this release
<Zic> don't know if it's a new kube-dns image or something else but it hasn't crashed once in three days
<lazyPower> :) I noticed the same
<lazyPower> its the image
<ybaumy> is the guy here who wrote that charmscaler ?
<tvansteenburgh> ybaumy: SimonKLB
<ybaumy> thanks
<ybaumy> SimonKLB: does that already work with ceph and nova-compute?
<stormmore> o/ juju world... miss me?
<lazyPower> ybaumy: it works with any charm, it depends on telegraf sending CPU metrics to influxdb
<lazyPower> stormmore: always ;)
<ybaumy> lazyPower: but ceph is io. how about that one then
<ybaumy> io and space
<ybaumy> would be cool if, when a certain iops barrier is broken, new nodes are added, and the same for low-space issues
<ybaumy> sorry for my typos
<ybaumy> i need beer soon
<lazyPower> ybaumy: it doesn't have support for metrics outside of CPU that i'm aware of today (room for improvement for future iterations)
<ybaumy> lazyPower: ok thanks
<lazyPower> i talked to SimonKLB about this briefly during the review cycle, and they're talking about adding more metrics to use in the charmscaler
<lazyPower> but i dont have an ETA or what those metrics would be
<ybaumy> lazyPower: good to know that somebody is working on something like that.
<lazyPower> SimonKLB: if i've grossly misrepresented anything here, i apologize :) Please correct me and cc me on the response :)
<lazyPower> ybaumy: yeah :D They started with an openstack scaler and took a step back and wrote a generic juju scaler, which is pretty choice that it now works with more things than just openstack vm's
<ybaumy> lazyPower: thats cool even if im just for openstack here .. currently
<lazyPower> ybaumy: it works there too ;)
<ybaumy> lazyPower: i understood that
<ybaumy> beer and football. bye
<Cynerva> Zic: cool, glad you were able to get that fixed :)
<Cynerva> Zic lazyPower: I feel dumb for not putting two and two together, but we saw similar issues with flags not updating on kubernetes-worker in our snap branch, and ended up putting a fix in there
<lazyPower> Cynerva: we have 4 parallel long running branches of kubernetes right this minute, i'm not surprised at all things are getting messy
<Cynerva> heh, yeah
<lazyPower> Cynerva: fyi, i pinged you matt rye and marco on the revised registry action i validated this morning
<Cynerva> hopefully we can get the fix landed with our branch, but i'll update the issue in case we need to fix it separately
<lazyPower> it didn't look like it was going to collide with anything, but i do want your input to see if any of that will cause a headache with your snap branch, or tim's gpu work.
<lazyPower> the only things that were updated are the action yaml and readme, the rest was isolated.
<lazyPower> oh and ingress configmap bits
<Cynerva> lazyPower: okay, looking
<lazyPower> that might be fun with our addon compiler
<lazyPower> we'll see though.
<stormmore> lazyPower, probably like a hole in the head ;-)
<lazyPower> stormmore: i have 2 of those
<lazyPower> actually i dont know where i'm going with this... so nevermind.
<lazyPower> tvansteenburgh: marcoceppi - is there a way for me to run an upgrade-charm --switch in amulet? I didn't see anything while skimming the api docs.
<marcoceppi> lazyPower: you have access to a `juju` method which you can pass in any args you want
<lazyPower> marcoceppi: self.d.juju?
<marcoceppi> lazyPower: looking
<lazyPower> is this right off the deployment object or is this a different object all together?
<lazyPower> ah ok ty
<stormmore> lazyPower, do we have an eta on when that bug we identified last week will be implemented?
<lazyPower> stormmore: x509 certs?
<lazyPower> stormmore: related to worker upgrade path from pre 1.5.3?
<stormmore> yeah
<marcoceppi> lazyPower: https://github.com/juju/amulet/blob/master/amulet/helpers.py#L61
<lazyPower> marcoceppi: ty
<lazyPower> stormmore: i've got that on my list to tackle first thing tomorrow and propose a hotfix for it
<lazyPower> stormmore: i was going to work with Cynerva who already has a branch fix for it, but that particular branch has not been folded back into master yet, so cherrypicking ftw
<lazyPower> marcoceppi: i have a more advanced question thats going to be messy i think...
<lazyPower> marcoceppi: how does amulet resolve the current charm under test? eg: d.add('etcd') is all you do...  there's some derivative of that, that puts things together for the author that I wont have by routing to this helpers.juju method.
<marcoceppi> lazyPower: what.
<lazyPower> well, i probably phrased this all wrong
<lazyPower> so in amulet, you d.add('thing') and it resolves where 'thing' is and puts it in /tmp and deploys that, right?
<lazyPower> i cant just say --switch /path/on/disk and expect this to work in CI without using the same path resolution amulet uses to determine where my local charm is.
<lazyPower> or am i overthinking it?
<marcoceppi> lazyPower: you're trying to validate an answer without asking a question. What are you trying to do? I might be able to give you a better path forward
<lazyPower> marcoceppi: i need to deploy etcd from the store at revision 24, it will be the last deb based release. To ensure i dont break upgrades moving forward, i'm adding a test to deploy the local charm as an upgrade from revision 24
<marcoceppi> lazyPower: ah, so that's probably problematic
<lazyPower> yeahhhhh
<lazyPower> i was hoping that wouldn't be the case. I can probably shell script this enough to be executed by bundletester as a class of tests
<marcoceppi> because d.add will see you're adding etcd, regardless of version and just laugh at you and say "this isn't what you want"
<lazyPower> eg bundletester -Y upgrade-test where it does env setup in bash and then hands over to amulet
<marcoceppi> and deploy the local copy instead
<marcoceppi> give me 2 mins to get you some sample script
<stormmore> lazyPower, ah the fun! no worries, I am trying to get caught up with documentation right now anyway
<marcoceppi> lazyPower: this is what will mess you up: https://github.com/juju/amulet/blob/master/amulet/charm.py#L54
<lazyPower> marcoceppi: yeah thats exactly what i was thinking
<lazyPower> the path resolution amulet uses by default is what i should be using, but as you're calling out, it's going to punch me in the face
<marcoceppi> lazyPower: so you couldn't ever really deploy a prev version
<marcoceppi> you could put d.add('etcd-24') but that will simply boil down to "etcd is what I'm testing, so lets use the repo on disk not the store"
<marcoceppi> it was code too clever
<lazyPower> marcoceppi: so how about the proposed work-around where it becomes a separate test configuration, that is initially setup with a bundle file, and then it manually upgrades the charm and drops into the amulet test suite associated with that scenario? would that be good enough if amulet can just source whats in the model and say "ok, i can proceed now?"
<marcoceppi> lazyPower: that should work
<lazyPower> ok let me strawman this out and see if i can get anywhere in 40 minutes.
<lazyPower> if not i'll bug you again about possible redirection
<marcoceppi> lazyPower: yeah, we can patch d.add() to handle strongly typed charmstore urls differently than lazy typed ones
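[Annotation: a sketch of the workaround lazyPower proposes above: a separate test scenario that deploys the store revision, manually upgrades to the local charm, and only then hands over to the amulet suite. The charm name and revision 24 are from the conversation; the test-file path is a hypothetical placeholder:]

```shell
# Upgrade-path scenario: store rev 24 (last deb-based etcd) -> local charm under test.
CHARM_DIR="${CHARM_DIR:-$PWD}"   # where the local charm checkout lives

upgrade_scenario() {
  echo "juju deploy cs:etcd-24 etcd"                  # pin the old store revision
  echo "juju upgrade-charm etcd --switch $CHARM_DIR"  # swap in the charm under test
  echo "python3 tests/20-upgrade-from-rev24.py"       # amulet assertions against the live model (path assumed)
}

upgrade_scenario
```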
<marcoceppi> stokachu: how is the OSX support for conjure-up coming along?
<stokachu> marcoceppi: still on the todo list, getting jaas support added now
<kwmonroe> stokachu: i'll be your huckleberry if/when you need a conjure-up tester ^^
<kwmonroe> i happen to have a shiny macintosh
<stokachu> kwmonroe: thanks, yea i am trying to finish this other stuff up to get to that next
<kwmonroe> no problem stokachu, there's still plenty of time before EOD ;)
<lazyPower> marcoceppi: i think this workaround will work. its a bit meat-fisted because its executing between lint/proof steps but i'll eat the time differential to have a functional automated test.
<lazyPower> will bug and try to flesh this out more later. thanks for getting me unblocked
<stormmore>  I blame lazyPower for the split :P
<lazyPower> i would too
<lazyPower> that guy
 * lazyPower shakes a tiny fist
 * stormmore bites his lip trying not to make a joke or 2 that would get him in trouble
<SimonKLB> lazyPower: just read your response to ybaumy and I'm not disagreeing with anything
<lazyPower> glad i'm on message :)
<SimonKLB> adding more machine metrics would be super-easy, it's more of not adding too much and making the charmscaler too complex
<lazyPower> SimonKLB: perhaps flavors of charm scaler?
<SimonKLB> but custom application-specific metrics would also be possible, that would need some extra work though
<lazyPower> IO Scaler, CPU Scaler, Mem scaler - COMPLICATED_BUT_ALL_INCLUSIVE scaler.
<SimonKLB> yea, that is a possibility, wouldnt want like 10 different charms to keep track of though :D
<lazyPower> SimonKLB: layers baybeh
<SimonKLB> true that ;)
<SimonKLB> that could actually be really neat, i'll definitely add it to the backlog and see if it gets traction by the rest of the guys :)
<magicaltrout> folks started using the CDK today lazyPower so you can officially say your code is in use by DARPA and NASA
<magicaltrout> i might have to buy you a badge or something
<lazyPower> O_O
<lazyPower> Life Achievement unlocked!
<magicaltrout> heh
<lazyPower> mbruzek1: Cynerva ryebot ^
<stormmore> I may have to "play" with the autoscaled kubernetes bundle
<lazyPower> magicaltrout: glad you got everything sorted though :) I was nervous based on last week's direction of conversation
<SimonKLB> stormmore: please do! and let me know how it went :)
<lazyPower> namely missing layers from your DOCKER_REPOSITORY
<magicaltrout> yeah, well I don't know what happened there, but the trusty nuke option seemed to fix it
<magicaltrout> we've done that often enough outside of CDK anyway, i think its pretty standard ;)
<lazyPower> because yay docker
<lazyPower> \o/
<stormmore> SimonKLB, it might be a bit but from what I am reading it should solve a problem I haven't figured out yet
<ryebot> lazyPower: haha awesome :D
<kwmonroe> if <solar-flare>; ./hooks/update-status; fi
<magicaltrout> hehe
<kwmonroe> status-set "oh noooooooo"
<magicaltrout> is that the one where they miss mars again due to doing the sums wrong?
<kwmonroe> sums{ locale: en-US }; # calculate burn based on kilometers-to-target; burn( sums{ locale: $LOCALE } ).
<kwmonroe> fool-proof
<kwmonroe> https://jujucharms.com/login/u/spicule/saiku-hadoop-spark
<kwmonroe> heh, magicaltrout ^^ disregard that paste.. i was formulating a question for you
<magicaltrout> uh oh
<kwmonroe> magicaltrout: what would it take to rebase saiku-h-s to the most current hadoop-spark?  iow, can i swap out the hadoop-spark that you have (https://jujucharms.com/u/spicule/saiku-hadoop-spark) with this one?  https://jujucharms.com/hadoop-spark/
<magicaltrout> i doubt that saiku bundle will spin up at the moment. I have another one getting ready here i'm hoping to drop next week
<magicaltrout> oh the download does work
<magicaltrout> it probably will start
<magicaltrout> well there wasn't anything overly special in that bundle so it should just get an update
<kwmonroe> weeeeellllll magicaltrout.. i think you were basing that bundle off apache-* charms, which have been replaced by bigtop-* stuffs.  if the saiku bits don't care (which i think they don't), then the apache->bigtop hadoop swap shouldn't make a difference.
<magicaltrout> yeah, i have a bunch of stuff to get done here, I've got a new guy starting in May who will hopefully pick up a bunch of this slack, and i'm finally getting my charms into CI slowly
<kwmonroe> no worries magicaltrout -- i just noticed that bundle because i'm fixin to release a refresh of the big data charms to align with the bigtop 1.2 release. anyone reliant on the spark interface may want to consider including the new bits.
<magicaltrout> don't break shit
<kwmonroe> 2 late
<magicaltrout> boo
<magicaltrout> I have some big data stuff coming in the next month or so
<kwmonroe> phew!  then it'll take a month or so before you realize i broke shit.
<magicaltrout> excellent
<kwmonroe> see ya next spring, future magicaltrout
<magicaltrout> check your msgs kwmonroe
#juju 2017-03-21
<kjackal> good morning Juju world!
<chrome0> I'm trying to upgrade juju 1.25.6 -> 1.25.10 but it seems to fail right at the beginning and not even starting the upgrade. http://paste.ubuntu.com/24220847/
<chrome0> Any idea what's going on here? At machine #0 I can't see anything that even registers as an upgrade attempt
<kklimonda> if one of the machines fails during bundle deployment, can I tell juju to retry this one machine and all units that depend on it?
<kjackal> kklimonda: this is a rather charm-specific issue. The charms should stay in waiting/blocked state and react to other charms' states
<kjackal> kklimonda: so as soon as you resolve the issue with the machine/charm the rest of the bundle should react to that
<kklimonda> but what if MAAS has failed to deploy the machine?
<kklimonda> I can't retry it, as juju plumbing would be missing
<ybaumy> juhu juju
<ybaumy> how do i change the cpu overcommitment values in nova.conf .. it says that it's maintained by juju
<ybaumy> i want a 20:1 ratio
<ybaumy> i know thats high but its just for test instances
<jianghuaw_> Hi, I deployed a bundle with maas; but one of the machines got broken (stuck with no access to this machine). So I marked this machine as broken. Now how can I proceed the deployment?
<jianghuaw_> Any advice? Is it possible to make it to allocate a new machine and proceed the remaining application deployment?
<kjackal> jianghuaw_ kklimonda: you can try a juju retry-provisioning . If that fails you can deploy the failing application in another machine and re-add the relations
<kjackal> if any
<kklimonda> kjackal: thanks, does it work for both machines and lxc containers?
<kklimonda> looks like what I'm looking for
<jianghuaw_> kjackal, thanks.
<kjackal> kklimonda: yes works for any provider. The bundle is just a "script"
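[Annotation: kjackal's suggestion above, sketched with a hypothetical machine number; if retry-provisioning fails, the fallback is to place the application on a fresh machine and re-add its relations:]

```shell
# Retry a machine that MAAS failed to deploy (machine number is hypothetical).
retry_machine() {
  echo "juju retry-provisioning $1"
}

# Fallback: put the failed application somewhere else and clean up the dead unit.
redeploy_elsewhere() {
  app="$1"; machine="$2"
  echo "juju add-unit $app --to $machine"
  echo "juju remove-unit $app/0"   # unit id on the dead machine is assumed
}

retry_machine 3
redeploy_elsewhere neutron-gateway 4
```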
<kjackal> ybaumy:  I do not see anyhting in https://jujucharms.com/nova-compute/266 . You could ask at #openstack-charms
<cnf> hmm, ok
<cnf> jamespage: this is the result from your bundle http://termbin.com/dy1u did i do something silly? you were going to test it, right?
<jamespage> cnf: I think what I said was I don't have anywhere to test this right now :(
<jamespage> cnf: juju status --format=yaml might tell us more
<cnf> ah, ok :P
<cnf> message: '{"hostname": ["Node with this Hostname already exists."]}' o,O
<cnf> that's on an lxd container
<cnf> also
<cnf>          current: provisioning error
<cnf>           message: 'unable to setup network: host machine "0" has no available device
<cnf>             in space(s) "space-openstack-mgmt"'
<cnf> but it does?
<cnf> http://termbin.com/va82
<cnf> oh, nm
<cnf> it seems MAAS is being weird again?
<cnf> wtf?
<cnf> jamespage: ok, back to #maas it seems omO
<cnf> or is it
<cnf> bah! i don't know
<cnf> this is frustrating
<icey> is there a good way to make an interface that's local to a specific charm rather than a global interface on interfaces.juju.solutions?
<marcoceppi> icey: yeah, just put it in INTERFACE_PATH (and set that environment variable)
<icey> marcoceppi: ideally I'd love to have it in tree for this charm
<marcoceppi> icey: why?
<marcoceppi> kind of defeats the purpose of interop
<icey> marcoceppi: it's an existing interface; I'm porting a charm (old, bash + python) into layers + reactive. The interface is very explicitly only for this application
<marcoceppi> icey: why even have an interface at all
<marcoceppi> if nothing will ever connect to it
<icey> marcoceppi: there are things that connect to it, but nothing else will provide it
<icey> I suppose I can make a full interface and host it
<marcoceppi> icey: well the interface on interfaces.juju.solutions include the provides and requires parts
<icey> marcoceppi: yeah
<marcoceppi> icey: yeah, I would go the full monty if other things consume (or provide) it
<icey> the other things consuming it already have written their code to do that ;-)
<marcoceppi> icey: but new applications can now use the layer you produce :)
<icey> indeed marcoceppi
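[Annotation: the INTERFACE_PATH approach marcoceppi mentions, sketched out; the directory layout is an assumption about how charm-tools resolves interface layers from that variable:]

```shell
# Keep an interface in-tree and let `charm build` resolve it locally.
# Assumed layout: ./interfaces/<interface-name>/{interface.yaml,provides.py,requires.py}
export INTERFACE_PATH="$PWD/interfaces"
echo "charm build"   # with INTERFACE_PATH set, interface layers are looked up here first
```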
<marcoceppi> stokachu: https://github.com/conjure-up/conjure-up/issues/750 any thoughts?
<stokachu> marcoceppi: what does 'sudo apt-get update' give you? does it work?
<marcoceppi> stokachu: no, 403 as well
<stokachu> hmm
<stokachu> so we rely on either that ppa or the snap version of lxd
<stokachu> marcoceppi: are you behind a proxy or anything?
<marcoceppi> at home
<stokachu> maybe that needs a refresh
<marcoceppi> oddly enough, works elsewhere
<marcoceppi> nvm
<stokachu> got it?
<marcoceppi> actually, nope
<marcoceppi> okay, I think I figured it out
<stokachu> what was it?
<marcoceppi> bad apt proxy config
<marcoceppi> well, apt proxy config, which was fine, but not for ppas
<stokachu> ah ok
<marcoceppi> now I have to fight with snapd
<jrwren> marcoceppi: apt-cacher-ng ?
<marcoceppi> squid-deb-proxy
<cnf> hmz ok
<cnf> lets try this again :(
<cnf> it seems juju breaks my network config on maas nodes
<cnf> and then complains about it
<cnf> :(
<cnf> it tries to move an ip to a bridge
<cnf> and then fails?
<cnf> can anyone help debug this?
<cnf> i'm stuck at http://termbin.com/3z4n
<cnf> hmm, this is really frustrating
<cnf> jamespage: are you available?
<cnf> hmm, how do i upgrade a controller?
<cnf> 2.1.2 has something about one of the errors
<cnf> >,<
<cnf> ugh, wtf
<cnf> upgrade instructions don't work
<rick_h> cnf: you did juju switch controller and juju upgrade-juju?
<cnf> rick_h: yes, upgrade-juju doesn't work
<rick_h> what does it say?
<cnf> $ juju upgrade-juju
<cnf> no prepackaged tools available, using local agent binary 2.1.2.1
<cnf> ERROR no matching tools available
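[Annotation: for reference, the controller-upgrade flow rick_h refers to, as a sketch; the version pin is the one under discussion, and whether the agents resolve depends on streams access, which is the open question in this thread:]

```shell
upgrade_controller() {
  echo "juju switch controller"
  echo "juju upgrade-juju --agent-version 2.1.2"   # pin the target agent version
}

upgrade_controller
```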
<rick_h> cnf: hmm, what did you deploy from?
<cnf> 2.1.2-sierra-amd64
<cnf> it is what i used to bootstrap
<cnf> well, i used 2.1.1 to bootstrap
<rick_h> cnf: hmm, so it couldn't find a 2.1.2 for the architecture and tried to use yours but you're on osx which of course can't be uploaded.
<cnf> right
<rick_h> cnf: what is this installed on?
<cnf> an ubuntu vm
<rick_h> cnf: the controllers?
<rick_h> not sure why it wouldn't be able to upgrade an ubuntu vm to 2.1.2. What version is it at now? juju show-controller xxxxx
<cnf> 2.1.1
<rick_h> on a xenial VM?
<rick_h> ubuntu xenial that is
<cnf> yes
<cnf> i'm trying to upgrade because https://jujucharms.com/docs/devel/reference-release-notes says "[juju] Handle 'Node with this Hostname already exists' errors when provisioning containers. LP:#1670873"
<mup> Bug #1670873: juju fails when requesting an IP for a container when retrying after lxd forkstart <oil> <oil-2.0> <juju:Fix Released by jameinel> <juju 2.1:Fix Released by jameinel> <https://launchpad.net/bugs/1670873>
<cnf> and my deploy is failing, and this is one of the errors in my logs
<cnf> don't know if this will fix it, but hey
<rick_h> cnf: k, that sounds good. I'm not sure why the upgrade wouldn't find the 2.1.2 agents though.
<cnf> uhu
<rick_h> cnf: so the controller is a VM, how did you get it setup? With the manual provider? I thought I saw something about MAAS earlier?
<cnf> juju model-config shows the right proxies are set
<cnf> rick_h: juju bootstrap
<cnf> it's a KVM running on the MAAS controller, and added as a MAAS machine
<rick_h> cnf: on the vm itself using localhost?
<cnf> so it is a VM, but it looks like a maas machine to juju
<rick_h> cnf: ah ok, so yea using the maas provider bits
<cnf> yes
<cnf> so i use my laptop to run the juju command against a maas / juju setup in the lab
<rick_h> balloons: have a sec? can you think of why the controller wouldn't find agents for 2.1.2 on xenial/maas? ^
<rick_h> cnf: makes sense
<balloons> rick_h, what does the log show?
<cnf> which log?
<balloons> bootstrap -- try running it with --debug too
<balloons> the obvious answer is if the maas doesn't have outside internet access
<cnf> http://termbin.com/p59f
<rick_h> balloons: it's already bootstrapped and running
<cnf> balloons: the bootstrap was done 2 weeks or so ago
<rick_h> balloons: so he's trying to juju upgrade-juju to go from 2.1.1 to 2.1.2
<balloons> what's the controller version?
<balloons> Are we sure it thinks it's 2.1.1?
<rick_h> balloons: using show-controller that's what cnf says?
<cnf> and it was installed 2 or 3 weeks ago :P
<cnf>  agent-version: 2.1.1
<balloons> ack
<rick_h> cnf: can you run the 'juju upgrade-juju' with --debug on it and see if it outputs something helpful on where it's going and we can verify it can reach it?
<cnf> rick_h: i did, and pasted the output above
<cnf> http://termbin.com/p59f
<rick_h> oh, sorry I missed it
<balloons> it's weird because I don't see it doing anything beyond starting the initial search
<cnf> yeah
<cnf> and juju model-config shows the right proxies are set
<balloons> cnf, can you bootstrap a new 2.1.2 controller? Just to make sure you can see streams?
<cnf> balloons: streams?
<cnf> btw, "juju status" says "upgrade available: 2.1.2"
<cnf> balloons: and bootstrapping a new controller would be about an hour's work
<cnf> i have nothing ready to take it, atm
<balloons> cnf, no worries. Don't want to try that then
<cnf> i'll put that on the last resort list :P
<cnf> it's possible, but i'd rather go for the easer debugging first, if we can
<balloons> cnf, sync-tools may also be an ok test
<balloons> if you run with juju upgrade-juju --dry-run or juju upgrade-juju --dry-run --agent-version 2.1.2
<balloons> what happens?
<cnf> $ juju upgrade-juju --dry-run --agent-version 2.1.2
<cnf> upgrade to this version by running
<cnf>     juju upgrade-juju --agent-version="2.1.2"
<cnf> :P
<balloons> juju sync-tools --public --debug --version 2.1 --local-dir=. --dry-run --stream=released
<cnf> http://termbin.com/qx52
<balloons> k, so it can see agents just fine
<cnf> balloons: 16:44:55 DEBUG juju.environs.simplestreams simplestreams.go:454 skipping index "file:///Users/cnf/tools/streams/v1/index2.json" because of missing information: "content-download" data not found
<cnf> is that normal?
<balloons> cnf, do you have local streams? Where did that come from?
<cnf> balloons: i took away the --dry-run from that last command
<andrew-ii> A model can't reasonably share machines, right? Like, two models can't coexist?
<cnf> hmm
<balloons> cnf, juju show-controller
<cnf> balloons: https://bpaste.net/show/60fec9e338f8
<rick_h> andrew-ii: no, the little watchers running on there would probably get pretty confused
<andrew-ii> rick_h: Thanks - I was pretty sure it was nonsensical, but I didn't find the verbiage (obvious as it may be)
<cnf> balloons: i'm both glad and worried this stumps you as well :P
<cnf> glad because it means i wasn't doing obviously stupid stuff, and worried because debugging it is going to be a pita
<balloons> cnf, do you know the history of the controller? How was it created, and what's happened to it along the way?
<cnf> balloons: yeah, it's all me
<cnf> i bootstrapped it 2 or 3 weeks ago
<cnf> from this very machine
<balloons> using the 2.1.1 client right?
<balloons> cnf, juju model-defaults
<cnf> yes
<cnf> http://termbin.com/hoqz
<cnf> model-config has proxy overrides
<balloons> cnf, how about model-config then as well :-)
<cnf> http://termbin.com/detm
<Budgie^Smore> o/ juju world
<balloons> cnf, juju model-config logging-config=juju.apiserver=trace
<balloons> juju model-config -m controller logging-config=juju=trace
<cnf> ok, and then juju upgrade-juju again?
<balloons> yea, with debug. I don't think in this case it will show anything more
<balloons> However, we're not getting a good return on finding the agents from streams.
<balloons> Have you upgraded controllers before?
<cnf> it doesn't show anything more
<cnf> no
<cnf> balloons: i'm brand baby new to juju / maas
<balloons> There may be an issue with it not respecting proxy on upgrade
<cnf> (been quite a frustrating experience so far :( )
<balloons> It's clear I think that it's not seeing streams. It should return something
<balloons> It's possible to bootstrap another controller and migrate your workload, or do use sync-tools and manually push the agent to the controller to upgrade it
<cnf> i did sync-tools, so i have them all locally now
<cnf> where should they be on the controller?
<cnf> (though it sounds like a bug of sorts)
<balloons> cnf, yes, you found a bug indeed I believe. I'll have to repro it, but I'm thinking that's it
<cnf> $ pwd
<cnf> ubuntu@juju-controller:/var/lib/juju/tools$ ls
<cnf> 2.1.1-xenial-amd64  machine-0
<cnf> btw
<cnf> that is on the controller
<balloons> right, so we'll need to make a local stream, then tell the controller about it, and upgrade using it
<cnf> ok
<balloons> ok, so you ran synctools and have them all in a local folder?
<Budgie^Smore> lazyPower, you about today? do you remember us talking about elasticsearch integration?
<cnf> balloons: http://termbin.com/mq29
<lazyPower> Budgie^Smore: in what context?
<cnf> http://termbin.com/rdvo is cleaner :P
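[Annotation: a sketch of the local-stream plan balloons outlines below: sync the agents to a local directory, serve it somewhere the controller can reach, and point the model at it. The URL is a placeholder, and the `agent-metadata-url` key should be checked against your juju version:]

```shell
local_stream_upgrade() {
  echo "juju sync-tools --local-dir ./streams --stream released"
  echo "juju model-config -m controller agent-metadata-url=http://example.internal/streams/tools"
  echo "juju upgrade-juju -m controller --agent-version 2.1.2"
}

local_stream_upgrade
```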
<Budgie^Smore> lazyPower, from memory (so don't quote me on this, slept a few times since then), you were already testing it as part of the CDK bundle
<lazyPower> Budgie^Smore: when we first released the CDK bundle we had elastic beats + elasticsearch + kibana as part of the bundle
<lazyPower> so the integration points already exist
<Budgie^Smore> so add charm, relate and deploy?
<lazyPower> That should be the basics, yeah
<lazyPower> Budgie^Smore: give me a bit i'm in a meeting
<lazyPower> Budgie^Smore: but you should be able to deploy beats-core and add the beats=>kube relations, configure the beats and that should be the basics of the operation though.
<lazyPower> there's a new elasticsearch charm incoming from bdx that targets ES 5.x
<lazyPower> might be worth tracking that work as well
<Budgie^Smore> no worries, about to head to the office
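[Annotation: the beats=>kube wiring lazyPower describes, sketched with charm names from the original CDK bundle; the relation endpoints are assumptions, so verify them against the charm docs before relying on this:]

```shell
deploy_beats_stack() {
  echo "juju deploy elasticsearch"
  echo "juju deploy kibana"
  echo "juju deploy filebeat"
  echo "juju add-relation filebeat kubernetes-worker"   # ship logs off the workers
  echo "juju add-relation filebeat elasticsearch"
  echo "juju add-relation kibana elasticsearch"
}

deploy_beats_stack
```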
<balloons> cnf, sorry just a moment
<cnf> sure
<balloons> cnf, which model are you in?
<cnf> controller
<cnf> or i should say admin/controller i guess
<balloons> cnf, so my idea isn't possible to avoid you bootstrapping another controller
<cnf> :(
<balloons> I beat on it a bit, but apart from manually placing the tools and editing the db, it's not going to happen
<cnf> do you know what the bug is?
<cnf> i don't fancy doing this again next time :/
<balloons> cnf, I'd encourage you to post to the list to get feedback on others in locked down maas environments on how they manage things
<balloons> the collective knowledge is better than me
<cnf> i'm not quite sure what to post
<cnf> besides "it doesn't work"
<balloons> cnf, well you shouldn't be doing any of this. upgrade-juju should just work
<cnf> yes, it should :P
<cnf> but if i don't know why it doesn't, i don't trust it will work next time
<balloons> cnf, can you ssh into the controller
<cnf> yes
<balloons> we actually never proved the controller has a good proxy
<balloons> juju ssh -m controller 0
<balloons> then try grabbing from streams.canonical.com
<cnf> <h1>Index of /</h1>
<balloons> cnf, when you bootstrap, the agent is uploaded by the client to the controller. The deployed machines then get the agent from the controller. So it's possible the controller doesn't have access at all
<cnf> curl can get to it just fine
<cnf> unless juju just ignores the set proxies
<balloons> via proxy yes?
<cnf> yes
<balloons> are we sure the proxy set in juju is correct?
<cnf> uhm, no?
<cnf> i don't know what "correct" is besides "juju model-config"
<cnf> it's set in /etc/apt/apt.conf.d/95-juju-proxy-settings
<cnf> and it's set in the env
<cnf> so i don't know what else should be set
<cnf> unless the tool that does the downloading from the stream ignores this?
<balloons> ahh, we did confirm juju status shows upgrade available, yeah?
<balloons> so the proxy works, the values set in model-config match your expectations, and juju can kind of see the upgrade, since it tells you it exists
<cnf> yes
<cnf> controller  dsmaas-controller  dsmaas        2.1.1    upgrade available: 2.1.2
<cnf> right
<balloons> cnf, did you bootstrap with those proxy settings?
<cnf> yes
<balloons> did you show me juju model-config -m controller?
<kklimonda> who is conjure-up targeted at?
<kklimonda> (for example I seem to be missing the point of deploying OpenStack with one command, given how complicated the software is, and how much planning ahead is required)
<kklimonda> is this for demos and lab?
<balloons> stokachu, ^^
<bdx> kklimonda: getting the initial base infrastructure stack deployed successfully is one thing, maintaining it over time is another
<cnf> balloons: yes,  but i can show it again :P
<balloons> cnf, ty :-)
<cnf> balloons: http://termbin.com/69cn
<bdx> kklimonda: the ability to spin up openstack, or any other complex software stack with a single command is really the polish on the block
<balloons> Anyways, if you wouldn't mind posting to the mailing list about your issues, that would be lovely. It would also be useful for you to ask how folks best handle upgrades in these situations, though, it should work.
<kklimonda> bdx: yes - but that's my point. what does it bring to the table over juju?
<kklimonda> it just seems to be another layer of indirection
<bdx> kklimonda: it's a layer of usability
<cnf> i don't know the mailing list
<cnf> i'm generally not a fan of mailing lists
<balloons> cnf, ahh. Well, you certainly don't have to post. I can file a bug about it, but it may be useful for you to do so, so you can track it: https://bugs.launchpad.net/juju/+filebug
<cnf> what do i call it?
<cnf> juju upgrade-juju fails ?
<bdx> kklimonda: I don't want the users of my charms maintaining yaml configs all over the place, it's easier for me, and my users, if I create spells for these infrastructure stacks so that the deploys can be interactive and intuitive
<cnf> it's so generic :/
<balloons> cnf, juju upgrade-juju doesn't honor proxy settings
<bdx> kklimonda: especially for openstack .... your config.yaml for an openstack bundle can end up being 1000+ lines
<balloons> I wonder if I can repro quickly actually
<cnf> bdx: been trying to deploy openstack with juju for 3 weeks, little polish to that :(
<cnf> balloons: are you sure it's the proxy ?
<balloons> cnf, did you try conjure-up, heh?
<kklimonda> bdx: yes, but that configuration (and the decisions behind it) still have to be made
<balloons> cnf, your log indicates you don't get anything back from the version check
<balloons> cnf, it should return "nothing to upgrade" or "XXXX found". You get nothing, and it drops to trying a locally built one
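If the version check is going out without the proxy, an empty result is exactly what you would see. As a rough sketch (this is the conventional Go HTTP client behaviour, not verified against juju's source, which is what the bug is about), the effective proxy for the stream fetch would resolve from the environment like this:

```shell
#!/bin/sh
# Sketch: how a Go HTTP client (juju's CLI is Go) conventionally picks a
# proxy from the environment for an HTTPS request. The precedence shown
# (lowercase first, then uppercase) is the usual convention; whether juju
# honours it at all here is the open question.
effective_proxy() {
    case "$1" in
        https) printf '%s\n' "${https_proxy:-${HTTPS_PROXY:-}}" ;;
        http)  printf '%s\n' "${http_proxy:-${HTTP_PROXY:-}}" ;;
    esac
}

# Compare this against `juju model-config http-proxy https-proxy`; if the
# two disagree, curl and juju are not using the same proxy.
echo "https proxy from env: $(effective_proxy https)"
```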
<bdx> cnf: I have successfully deployed openstack with juju in a myriad of different ways, let me know if you need some insight, I would be glad to give you some pointers if needed
<cnf> bdx: i can't get anything sensible out of juju so far
<zeestrat> kklimonda: As another side of the tale, we find the extra layer of abstraction to give little value so we stick to Juju.
<cnf> can't even upgrade it, it seems
<bdx> kklimonda: conjure-up also allows for different types of provisioning automation not available via vanilla juju
<bdx> kklimonda: e.g. lxd-profiles
<cnf> balloons: https://bugs.launchpad.net/juju/+bug/1674759
<mup> Bug #1674759: juju upgrade-juju doesn't honor proxy settings <juju:New> <https://launchpad.net/bugs/1674759>
<cnf> sorry if it's a bit succinct, i'm tired and hungry atm
<balloons> cnf, I know we didn't want to bootstrap a controller, but I would encourage you to try upgrading with https://jujucharms.com/docs/2.1/models-migrate. It's likely how you should manage models in production anyway
<balloons> ie, bootstrap a 2.1.2 controller, then migrate your models. Finally, teardown the 2.1.1 controller
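Spelled out, the migrate-then-teardown path balloons suggests would look something like this. The controller names echo cnf's status output; the model name and the exact sequence are an assumption, and DRY_RUN=1 makes the sketch print commands rather than execute them:

```shell
#!/bin/sh
# Upgrade-by-migration sketch: bootstrap a fresh controller at the newer
# version, migrate models across, then tear down the old controller.
# Names are illustrative; DRY_RUN=1 echoes commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"          # show what would run
    else
        "$@"
    fi
}

run juju bootstrap dsmaas dsmaas-controller-2       # comes up at 2.1.2
run juju migrate dsmaas-controller:default dsmaas-controller-2
run juju destroy-controller dsmaas-controller       # only once migration completes
```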
<cnf> balloons: well, that assumes you have spare machines to do this :P
<balloons> cnf, yea, that's the blessing and curse
<cnf> i'll have to do that twice
<cnf> once away, once back
<balloons> why twice?
<cnf> and bootstrapping a new machine takes 20 minutes on its own
<cnf> balloons: once to a machine, then back to the vm
<bdx> kklimonda: look at the example of the kubernetes spell for lxd provider .... conjure-up lends to some really cool extended functionality where you can use pre/post scripts to modify things outside of your juju environment
<cnf> conjure-up looks like even more magic ontop of juju
<cnf> i can't get juju to behave sanely, i don't want even more magic personally
<bdx> cnf, kklimonda: https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/steps/00_pre-deploy
<bdx> it allows you to do cool and important things that you guys aren't really looking at or taking into account
<cnf> bdx: that looks ugly, what's that for?
<balloons> ty for the bug report cnf
<bdx> cnf: that is a conjure-up pre-deploy script - it runs prior to conjure-up deploying your juju stuff to configure the lxd profile https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/steps/lxd-profile.yaml#L6
<cnf> o,O
<bdx> cnf: things like this (customizing lxd profiles) are a huge hassle as well as a blocker for people trying to deploy things to containers that need special modifications
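For a sense of what such a pre-deploy step changes, a profile fragment of the kind the spell applies might look like the following. This is illustrative only, not the actual lxd-profile.yaml from the spells repo; kubernetes inside unprivileged LXD generally needs settings along these lines:

```yaml
# Illustrative lxd profile fragment, NOT the spell's actual file.
config:
  security.privileged: "true"
  security.nesting: "true"
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
description: "profile tweaks a conjure-up pre-deploy step might apply"
```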
<cnf> and how do i install conjure-up?
<bdx> cnf, kklimonda: if you aren't wise to all of the things you would need to do to lxd to make it support deploying kubernetes ... these things are encapsulated in the conjure-up workflow/spells
<cnf> i don't care about lxd, much
<bdx> cnf: are you familiar with snaps?
<cnf> no
<bdx> cnf: you are trying to ride the horse without knowing well the saddle
<cnf> ...
<kklimonda> bdx: I mean, I understand what you're saying but someone will have to understand all this once the deployment is done
<cnf> conjure-up is available on both Ubuntu Trusty 14.04 LTS and Ubuntu Xenial 16.04 LTS
<cnf> no ubuntu here
<lazyPower> work is underway to port it to MacOS, but that's still pending.
<bdx> cnf: you are going to have a tough time trying to run all/any of this from non Ubuntu Xenial
<cnf> bdx: so no thank you
<bdx> cnf: conjure-up is delivered as a snap - which isn't a thing on non-ubuntu systems
<lazyPower> bdx: thats a lie
<bdx> :-0
<lazyPower> bdx: snap is supported on centos, debian, arch, sles
<bdx> oooh
<bdx> thanks LP!
<lazyPower> bdx: <3 happy to alley oop some knowledge
<bdx> cnf: srry
<cnf> so you don't know your own tools...
<cnf> either way, as long as juju isn't reliable, no amount of magic on top of it will make me trust it
<bdx> cnf: I'm just a community member, I don't work for canonical .... I try to know the tools as well as possible, I didn't know snap was supported across other os's srry
<bdx> kklimonda: have to know what?
<cnf> you don't deploy / run openstack without knowing the details
<bdx> cnf: what details?
<cnf> yeah, you don't run openstack, i take it
<cnf> balloons: where is the mailing list, btw?
<bdx> yeah ... I have been since it was released to the public ... before there was automation
<kklimonda> bdx: for example, even if conjure spell manages apparmor profiles on host machines for the user, that doesn't mean this is removing any burden - you still have to understand what has been changed, and why, or you'll have a bad day later.
<balloons> cnf, https://lists.ubuntu.com/mailman/listinfo/juju
<cnf> bdx: in production, with people using it?
<bdx> cnf: yea man
<bdx> deployed via juju
<cnf> and you suggest people deploy openstack without knowing about how it works? what network is used for that
<cnf> or what hardware is holding ceph data etc
<bdx> cnf: that is the admins responsibility to track that information
<cnf> balloons: oh, you need to join it?
<cnf> o,O
<bdx> cnf: juju/maas really help there too
<lazyPower> cnf: our goal is not to remove the knowledge requirement for management over time, it's to remove the knowledge barrier to get started. We still advocate you should read the book at least once; we want to abstract away that requirement, though.
<balloons> cnf, well juju@lists.ubuntu.com if you just want to send a mail. But replies posted only to the list obviously won't reach you
<cnf> bdx: for the past 3 weeks, juju / maas has been getting in my way
<cnf> not helping me
<bdx> cnf: sorry to hear that, let me know if you need some guidance
<cnf> to be quite honest, if i can't get something working soon, my advice here will be to NOT use juju / canonical for openstack deployment
<cnf> bdx: feel free to start with https://bugs.launchpad.net/juju/+bug/1674759
<mup> Bug #1674759: juju upgrade-juju doesn't honor proxy settings <juju:New> <https://launchpad.net/bugs/1674759>
<bdx> kklimonda: so if you are deploying kubernetes to lxd, you are probably just looking for a POC anyway right?
<kklimonda> bdx: well, conjure-up seems like a pretty cool POC/lab deployment tool
<kklimonda> I don't deny that :)
<cnf> balloons: thanks for the help so far, i'm calling it a day
<cnf> i'm tired, frustrated and hungry
<cnf> balloons: not a good state to deal with this :P
<balloons> cnf, you are most welcome. Sorry to hear about your troubles :-(
<bdx> kklimonda: I can't speak for others, or for how the tool is intended to be used, but I find it most useful for getting my initial stack deployed, then I switch over to pure juju for the rest of the lifecycle ops
<bdx> cnf: did you try `juju upgrade-juju --build-agent`
<rick_h> bdx: seems like a legit bug in the upgrade command not able to go through the proxy that's set
<rick_h> bdx: and from a osx client can't provide agents for ubuntu xenial
<bdx> rick_h: ahh .. I forgot the osx thing , darn
<rick_h> bdx: yea, nasty bug basically
<bdx> ok then
<bdx> shoot
<stormmore> o/ juju world
<andrew-ii> bdx: you mentioned a good book to read? One that would make sense with version 2?
<marcoceppi> stokachu cory_fu any progress on osx + conjure?
<bdx> andrew-ii: I mentioned a book?
<andrew-ii> Might have been a figure of speech
<andrew-ii> Basically, I jumped into maas/juju full hog back before 2 was released, and never really got a cloud up. So it's been hard to get started.
<andrew-ii> I've tried conjure-up, but I'm just too ignorant to troubleshoot it, I think
<andrew-ii> So I'm slowly building openstack up manually, and that's been really informative
<bdx> andrew-ii: yeah ... that is probably the biggest backwards whale there is ... juju makes openstack really niceee, but you have to know a bit about openstack for it all to make sense
<bdx> andrew-ii: you should imagine juju as something that takes away a lot of the pain points, that being said, you still need to be entirely and overly familiar with the actual cloud substrate if you plan on supporting/operating/maintaining it to any degree
<andrew-ii> I've certainly felt that :)
<bdx> which it sounds like you are touching up in some of those areas for sure
<andrew-ii> You'd be amazed how much you can learn by ramming your head against an immovable object for a while
<bdx> haha ... right ... some may call it psychotic, but I'll stand by it
<andrew-ii> It's been a blast
<andrew-ii> Completely useless
<andrew-ii> But fun
<andrew-ii> Though it looks like I'll be able to use it for a simple test environment, so I'm actually excited to see that work!
<bdx> andrew-ii: have you gotten a POC deploy up via conjure-up or the openstack-base-bundle / openstack-lxd-base-bundle?
<andrew-ii> Every time I tried conjure-up, I ended up reinstalling the maas controller
<bdx> andrew-ii: I see. Did you find out why that was happening?
<andrew-ii> So I was following the setup on one of the Juju dev blogs, and it has a lot of network config
<andrew-ii> It seems like conjure-up didn't really respect that and, combined with some strange dns/routing issues, just sorta confused it
<andrew-ii> I'm sure it was recoverable, but I didn't know how or what was *really* wrong
<bdx> andrew-ii: yeah, (are you reading Dimiter's blog?) a good amount of that has been simplified / made possible via the maas gui now
<andrew-ii> Yes!
<andrew-ii> The new maas 2 interface is a joy
<andrew-ii> And that has greatly alleviated a bunch of the rebuild/configuration pain
<bdx> andrew-ii: fair enough - right
<andrew-ii> And I've almost gotten openstack manually deployed
<andrew-ii> It just hung on some relations that... well, should have been ok
<bdx> ooooh, you are building it manually via juju
<andrew-ii> So I'm rebuilding it tonight to see what I missed the first time in my config
<bdx> lol
<bdx> ok
<bdx> gotcha
<andrew-ii> Yeah - one of my machines seems to be slightly junk
<andrew-ii> Since it won't always be able to talk to the cloud images
<andrew-ii> So I need to do a lot of checks before I can let a bundle loose, or it just sorta stalls
<bdx> hmmm .... I see
<andrew-ii> I'll admit, it's weird
<andrew-ii> One machine basically only starts LXD containers once I destroy it once and rebootstrap
<bdx> as opposed to?
<andrew-ii> Seems there's some screwy TLS handshake issue that won't work because I think I have a dead network card
<andrew-ii> well, one or two of the network connections is slightly garbage - both trouble machines are SuperMicro
<andrew-ii> Bonding helped, but not perfectly
<bdx> lol oh man
<bdx> yeah ... that will be a stick in your spokes for sure
<andrew-ii> Basically, it seems to be a hilarious series of nonsensical bumps :P
<bdx> aweee, I'm sorry man ... I know how that goes ... just gotta take the good with the bad and roll on
<hatch> Hi everyone, we're aware of a service outage on jujucharms.com and are currently working quickly on a resolution.
<andrew-ii> Ain't no thing. I'm pretty zen about it
<andrew-ii> I knew going in that bucking for the 2.0+ releases was riding the bleeding edge, and so naturally I got a bit cut
<bdx> andrew-ii: when I've been in similar situations, I create a working source of truth and make small incremental changes from that starting point
<bdx> yeah
<andrew-ii> MAAS is stable and seems great now, so now I'm just learning juju, and soon OpenStack :)
<bdx> there ya go .... the best thing being, if/when you get stuck you know you have a whole community of engineers here to lend a hand - keep up the good work!
<hatch>  jujucharms.com outage has been resolved, thanks for your patience :)
<magicaltrout> I AM NOT PATIENT!
<magicaltrout> but i didn't notice
<magicaltrout> so i figure its okay
<hatch> lol
<hatch> that's the best kind of outage
<stormmore> lazyPower, I think I finally broke my AWS cluster badly
<magicaltrout> try switching it off and on again
<magicaltrout> or take the floppy disk out
<stormmore> magicaltrout, I did that made things worse :P
<magicaltrout> did you defragment the drive? or mark the bad sectors? :P
<stormmore> :P doubt that is the problem magicaltrout ... getting told by the master that it is waiting for kube-system pods to start and the workers are waiting for kubectl!
<magicaltrout> ah nice
<stormmore> at this point I am thinking about destroying and recreating it but wanted to know if lazyPower would like to get some failure data before I actually do destroy the cluster
<magicaltrout> when you say waiting for kubectrl you mean the executable isn't available?
<stormmore> not sure at this point, "Waiting for kubelet to start." is what juju status says about the node
<magicaltrout> oh
<tvansteenburgh> stormmore: juju ssh to the worker node, then `journalctl -u kubelet`
<stormmore> well for the workers. "Waiting for kube-system pods to start" is what the state is for the master
<tvansteenburgh> journalctl should have some info in there about why kubelet won't start
<tvansteenburgh> stormmore: for the master, juju debug-log --replay -i unit-kubernetes-master-0 # pastebin that somewhere
<stormmore> tvansteenburgh, http://paste.ubuntu.com/24224523/ the journalctl -u kubelet from a worker
<tvansteenburgh> that's the whole thing?!
<stormmore> no but it gets into a loop by the looks of it
<tvansteenburgh> that last error msg might be relevant but it's truncated
<tvansteenburgh> stormmore: i gotta step away for a bit, bbl
<stormmore> no worries, yeah I thought that... might try and remove that deployment but not sure how I can without the cluster being healthier
<stormmore> I am about to step away and take this home with me
<Budgie^Smore> ok I am home
<stokachu> kklimonda: conjure-up is way more than a poc tool
<stokachu> kklimonda: you can deploy to localhost if you want to
<stokachu> kklimonda: but you aren't limited to just localhost
<stokachu> you dont have to alter lxd profiles etc when deploying kubernetes to aws
<stokachu> so not knowing about lxd profiles won't stop you in learning how to deploy kubernetes
<stokachu> and if you have all day to read through 5 pages of documentation to setup a cluster then feel free
#juju 2017-03-22
<andrew-ii> I have an LXD container that seems to be live, but juju shows it as pending.
<andrew-ii> I'm a bit at a loss for why. The logs don't seem to show anything wrong, I think.
<andrew-ii> `juju ssh 0` followed by `sudo lxc list` shows it has IPV4 addresses, but nothing shows up for it with `juju status`
<andrew-ii> maas has the new container's two IP addresses shown like the machine's lxc command, but juju doesn't seem to be aware
<anastasiamac> andrew-ii: does `juju status --format=yaml` show error?
<andrew-ii> I don't think so. Just 'juju-status: current: pending' and 'machine-status: current: running'
<andrew-ii> I feel like maybe it's related to maas rack HA, since a few days ago I added a second rack controller (but didn't work on it until today)
<andrew-ii> This is with a juju 2.1.2 controller (fresh)
<andrew-ii> Oh and maas 2.1.3
<anastasiamac> andrew-ii: m not sure what's going on.. and there is nothing in Juju or MAAS logs?
<andrew-ii> Not that I saw... I'll check again, though.
<anastasiamac> andrew-ii: and u r not using kvm on maas?
<andrew-ii> No, should only be lxd, I think
<anastasiamac> andrew-ii: file a bug, include ur bootstrap/deploy commands, logs and --format=yaml status output :) any other info like MAAS setup, network topology, etc as u see fit
<andrew-ii> Alrighty. I was assuming it was something screwy with how I set it up
<andrew-ii> Should I try to rebuild the juju controller from an earlier version?
<anastasiamac> if u can confirm if it works with earlier version would b awesome \o/ u could also try later Juju version - 2.2: alpha1 or daily snap...
<anastasiamac> it mayb a problem with just 2.1.x
<andrew-ii> Alrighty. Lemme gather the logs and such
<andrew-ii> Err. Dumb questions asked poorly: how do I use root with `juju scp -- -r root@0:/var/log/juju ./machine-0-juju-logs`?
<kwmonroe> andrew-ii: juju run --unit <foo>/0 'cp -a /var/log/juju /home/ubuntu/ridiculous && chown -R ubuntu /home/ubuntu/ridiculous'
<kwmonroe> andrew-ii: juju scp -- -r cwr/0:ridiculous /tmp
<kwmonroe> it's not pretty andrew-ii, and i can't believe juju scp doesn't work in root context, but it is what it is.
<kwmonroe> oh, i said "cwr" in that scp command.. i meant "<foo>", as in, whatever unit you want to scp from.  i just happen to test this in an env with cwr/0 deployed.
<ybaumy> i just learned about the existence of foreman. does anyone have experience with which is better, juju or foreman?
<zeestrat> ybaumy: I'd compare it more to MAAS. Comes from Red Hat and has been around for a while. Works well with Puppet
<kjackal> Good morning Juju world
<BlackDex> Hello there. Is it possible to have an bootstrap node in LXD for usage with MAAS?
<BlackDex> so i have a maas node, and i create an LXD container on that same node to be the bootstrap node?
<cnf> BlackDex: i used kvm
<BlackDex> i do that also
<BlackDex> but i wonderd if i can skip the v in kvm ;)
<BlackDex> because of performance
<BlackDex> maybe i can do it by first creating a LXD container, and adding that to MAAS
<BlackDex> giving it the correct tags and tell juju to use that
<cnf> i don't think you can add lxd to maas?
<BlackDex> yea you can :)
<BlackDex> MAAS Supports LXD :)
<andrew-ii> kwmonroe: thanks! The command makes sense and works fine
<stub> What hook gets called if I 'juju attach' a resource to an application?
<stub> I'm guessing config-changed ?
<stub> nope, upgrade-charm according to docs.
<marcoceppi> stub: correct, upgrade-charm.
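As a sketch of that flow: an upgrade-charm hook that re-reads an attached resource could look like the following. resource-get is a real hook tool, but the resource name, the staging step, and the script itself are invented for illustration, and it guards itself so it only does real work inside a hook context:

```shell
#!/bin/sh
# Hypothetical upgrade-charm hook reacting to `juju attach`. "my-blob" is
# an assumed resource name; this is a sketch, not a published charm's hook.
handle_attach() {
    if command -v resource-get >/dev/null 2>&1; then
        # Inside a hook context: fetch the (possibly updated) resource file.
        path=$(resource-get my-blob) || return 0
        echo "resource staged at $path"
    else
        echo "no hook context"
    fi
}

handle_attach
```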
<tvansteenburgh> jamespage: where is the ganglia charm repo?
<tvansteenburgh> the charm points to https://code.launchpad.net/~charmers/charms/trusty/ganglia/trunk but that hasn't been touched in a long time
<jamespage> tvansteenburgh: https://github.com/ganglia-charms
<andrew-ii> Another DQAP: Can I deploy onto a controller?
<andrew-ii> Mostly because `juju deploy openvpn` is just so convenient...
<andrew-ii> Granted, I suspect it adds a ton of crazy to the model, but I've seen references to people finagling odd setups and figured I'd ask.
<cnf> morning
<cnf> balloons: poke?
<marcoceppi> andrew-ii: you can certainly try, there shouldn't be much collision, but it's something you'll want to test first
<marcoceppi> andrew-ii: juju deploy --to 0 -m controller openvpen
<balloons> cnf, howdy
<cnf> ohai! \o
<cnf> balloons: so i'm back it :P
<cnf> did you see https://bugs.launchpad.net/juju/+bug/1674759 ?
<mup> Bug #1674759: juju upgrade-juju doesn't honor proxy settings <juju:Incomplete> <https://launchpad.net/bugs/1674759>
<cnf> anastasiamac: also poke :P
<balloons> cnf, looks like it would be worth trying the unstable 2.2 and seeing if things are better
<cnf> balloons: can i upgrade to 2.1.2 with a 2.2 client?
<cnf> or are you/they suggesting upgrading the controller to 2.2?
<cnf> it's a PoC, so i don't mind overly much
<balloons> cnf, you couldn't use upgrade to go backwards, since there wouldn't be anything newer if you did try 2.2-alpha1
<balloons> But yea, it would be a new controller
<cnf> $ ./juju upgrade-juju
<cnf> no prepackaged tools available, using local agent binary 2.2-alpha1.1
<cnf> ERROR no matching tools available
<cnf> updated the issue as well
<cnf> balloons: i'm tempted to just delete the controller, and bootstrap a new one
<cnf> but i will probably run into this again, then
<balloons> cnf, just changing the client won't change things indeed. You'd have to bootstrap a new controller
<cnf> hmm, so my only option is to bootstrap a new controller
<cnf> hmm
<balloons> as far as verifying if the newer juju fixes the bug yes
<balloons> Or bootstrapping a newer controller and doing model migration
<cnf> hmm
<cnf> i might as well throw everything away, and start new
<balloons> lxd might be useful here as well
<cnf> balloons: how so?
<balloons> if you lack the physical machines
<cnf> can i deploy a controller to MAAS using LXD?
<balloons> I think actually the easier way to do it is to add a vm using virsh on the maas controller for the juju controller. That's a better way to double dip I think
<cnf> that's what i have now
<cnf> the juju controller is a KVM
<balloons> ahh, awesome
<balloons> so can you simply add another kvm?
<cnf> on an ESXi vm...
<balloons> it would only have to live long enough to migrate your workload. In the interest of checking the bug, you migrate, then attempt to upgrade
<cnf> well, "simply"
<cnf> but i have nothing deployed atm
<cnf> so i might as well just trash this one, and bootstrap again
<balloons> ack
<cnf> it just has me worried i'll run into it again
<cnf> hmm
<cnf> also, if i do this, i can no longer help debug the problem
<cnf> anyone know when anastasiamac comes online?
<balloons> cnf, about 7 hours from now
<marcoceppi> cnf: she's in NZ timezone
<cnf> hmm, that's a bummer :P
<balloons> cnf, presumably you could recreate easily enough
<kwmonroe> cory_fu: i don't know enough puppet to grok the syntax on https://issues.apache.org/jira/browse/BIGTOP-2708, but i *think* we're ok because we used role-based bigtop (and not component-based, which is what this is patching).
<kwmonroe> having said that, i shall deploy zeppelin and see what happens :)
<cnf> hmm, crap, i used the wrong user for the controller again
<cnf> can I switch the maas user the juju controller uses?
<cnf> hmm, why is juju status --output=yaml empty?
<cnf> juju status shows me plenty
<cnf> hmz
<cnf> wtf
<cnf> ugh, i really don't know how to debug this stuff...
<cnf> jamespage: what was the right incantation?
<jamespage> cnf: --format
<jamespage> cnf: --format=yaml
<cnf> oh
<cnf> i'm blind :P
<cnf> so now to find why this isn't working, still :(
<cnf> jamespage: http://termbin.com/vz8q see anything obvious?
<cnf> juju seems to just break shit, and then stop doing stuff
<cnf> wth :(
<cnf> maas brings up the machine with all the right IP's
<cnf> juju changes the IP's around to bridge interfaces, except doesn't do it right
<cnf> and then fails
<cnf> i think :(
<cnf> can anyone help with this?
<jamespage> jam: any chance you can caste your eyes over cnf's problem above?
<jamespage> appears to be some sort of network-space device binding allocation lxd type problem
<cnf> seems every time i file a bug, and get around it, something else pops up
<cnf> jamespage: btw, that person has not called me yet ^^;
<cnf> dunno if you made the link to the mails etc, but we met at cfgmgmtcamp
<jamespage> cnf: yep I gotcha
<ybaumy> jamespage: nice that you are here. maybe you can help. is the following right to do. remember i set the root password for the mysql charm already. now i want .. juju add-machine -n2 ; juju add-unit mysql --to machine1 ; juju add-unit hacluster --to machine2 ; juju add-relation mysql/0 hacluster/0 ; juju add-relation mysql/1 hacluster/0 ???
<cnf> so jam is the person i need for this, right?
<ybaumy> or is it juju deploy hacluster --to machine2 i guess
<cnf> hmm
<cnf> i'm at a loss here
<Budgie^Smore> lazyPower did you read the backlog in channel?
<lazyPower> Budgie^Smore: i did see that you encountered some failure scenario(s) in AWS
<ybaumy> cnf: i feel you
<Budgie^Smore> lazyPower yup, I need to resize the instances anyway so am going to do a clean install rather than salvage this one but wondered if your team would like any failure data before I actually do destroy the cluster
<kwmonroe> cnf: i'm pretty sure jam is in UTC+<a couple hours>.  you may have more luck pasting http://termbin.com/vz8q in #openstack-charms.
<cnf> kwmonroe: i think it's a juju issue, but no harm, i guess
<kwmonroe> cnf: i'm not familiar enough with network space binding to offer much help -- hopefully #openstack-charms has the people with the right know-how for ya.
<cnf> i hope so
<cnf> i'm getting quite dispirited...
<lazyPower> Budgie^Smore: oh heck yeah
<lazyPower> Budgie^Smore: can you get us a juju-crashdump of the model?
<lazyPower> Budgie^Smore: snap install juju-crashdump --edge && juju-crashdump. I can get you a secure upload point again
<lazyPower> need a sec tho i'm in an openstack meeting
<Budgie^Smore> yeah I need a min or two to get my laptop up and running, etc.
<Budgie^Smore> make that a few more minutes than that since I need more coffee
<Budgie^Smore> ok I haven't used snap before and getting an error about requiring classic or confinement override
<jamespage> kwmonroe: I'm already a bit stumped on what the problem is with cnf's deployment
<cnf> jamespage: bit more reference http://termbin.com/ep44 and http://termbin.com/6o0h
<jamespage> ybaumy: kinda - adding units is nearly as you describe
<cnf> maas brings it up fine, then juju tries to make the bridges
<jamespage> you'll want to do something like juju add-unit --to lxd:<physical machine's id> mysql
<jamespage> with regards to hacluster
<jamespage> ybaumy: juju deploy hacluster hacluster-mysql
<jamespage> juju add-relation hacluster-mysql mysql
<Budgie^Smore> lazyPower I added --classic to that snap command
<jamespage> ybaumy: hacluster is a subordinate charm so to make units of it, you relate it to a principal charm like percona-cluster
<jamespage> or glance or cinder or nova-cloud-controller
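jamespage's advice, collected into one sketch: the subordinate gets units only through its relation, and mysql units land in lxd containers on existing machines. The machine id is illustrative and DRY_RUN keeps the script from executing anything:

```shell
#!/bin/sh
# Subordinate-charm sketch per jamespage's advice; DRY_RUN=1 echoes the
# commands instead of running them (the machine id "1" is illustrative).
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run juju add-unit mysql --to lxd:1           # container on an existing machine
run juju deploy hacluster hacluster-mysql    # subordinate: no machines of its own
run juju add-relation hacluster-mysql mysql  # hacluster units appear alongside mysql units
```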
<ybaumy> ok
<ybaumy> gonna make a backup before i try that
<zeestrat> jamespage: Do y'all have a reference bundle for OpenStack HA laying around besides https://launchpadlibrarian.net/298175262/bundle.yaml?
<zeestrat> If not, then that might help you out, ybaumy.
<lazyPower> Budgie^Smore: ah good plan, i forgot that flag
<lazyPower> Budgie^Smore: sorry, this is what happens when i triple-task.... :\
<ybaumy> zeestrat: i dont want to start over at that point ..
<Budgie^Smore> no worries, being able to figure out issues like that is what makes me good at what I do
<ybaumy> zeestrat: i know of that template you pasted it before for me
<ybaumy> zeestrat: but i have a setup and dont want to start over every 2 days
<ybaumy> zeestrat: i already scripted a lot to fit the current environment; i dont know if the scripts will work with that new setup then
<zeestrat> ybaumy: My bad. Memory is not what it used to be!
<Budgie^Smore> getting a bunch of runtime/cgo: pthread_create failed: Resource temporarily unavailable
<ybaumy> zeestrat: yesterday i got a deadline for the POC. and i now have one month to show them that ubuntu/juju/maas is the platform we want to use. else i have to start with redhat or suse
<Budgie^Smore> getting a bunch of "runtime/cgo: pthread_create failed: Resource temporarily unavailable:" errors running that command
<ybaumy> so i need some support
<lazyPower> Budgie^Smore: thats emitting from snapd
<lazyPower> Budgie^Smore: so long as the process hasn't returned its still tarballing up the relevant bits of the model and preparing a dump package
<lazyPower> Budgie^Smore: i encourage you to extract it and take a quick peek before washing hands, this app dumps the full state of a "crashed model" for post analysis, this may be something you want to have in your toolkit as you're building a pretty heady k8s model right?
<Budgie^Smore> lazyPower, where did you want this run? I assumed on the client that was connecting to the controller but I am thinking now you meant a different system since I got told it needs to run as root but root isn't connected to the controller
<lazyPower> cory_fu: ^ needs to run as root? huh?
<lazyPower> Budgie^Smore: hang on cc'ing a stakeholder on the project
<lazyPower> Budgie^Smore: but no, it should be run as your user on your client workstation, that much is correct.
<Budgie^Smore> lazyPower oh I was intending on holding onto a copy to look at myself too as yes I want to at least try and avoid this scenario but I need to upgrade the underlying infra anyway and always like to start from a known good state
<cory_fu> lazyPower: Trying to get caught up on the backscroll, but what runs as root?
<cnf> no one that has an idea what is going on here http://termbin.com/vz8q ?
<lazyPower> cory_fu: ah, this was returned from juju-crashdump for Budgie^Smore
<lazyPower> cory_fu: i was instructing budgie to create a crashdump archive of the model before destroying it so we could do some post-analysis of what went sideways during model deltas.
<lazyPower> and apparently its complaining it needs to be run as root? is this new behavior? I've not seen this before.
<cory_fu> lazyPower: Hrm.  crashdump shouldn't require root on the local machine, and anything it does on the remote machine should be done with `juju run` so it should have root (I think)
<Budgie^Smore> lazyPower, cory_fu here is the full output from juju-crashdump: http://paste.ubuntu.com/24229662/
<cory_fu> lutostag: ^
<ybaumy> cnf: alot of waiting for machine... what does maas say?
<cnf> ybaumy: all machines up
<cnf> it's only "waiting" on lxd containers
<lazyPower> oh that bubbled up from snapd again. thats so weird.
<cnf> juju is fucking up the networking
<lazyPower> Budgie^Smore: did you get a crashdump package in $PWD?
<cory_fu> Yeah, I have no idea what those messages are about
<cnf> ybaumy: http://termbin.com/ep44 and http://termbin.com/6o0h
<Budgie^Smore> lazyPower don't see any files created :-/
<cnf> maas makes the net devices, and adds the right IPs
<lazyPower> :| thats no bueno, it should have left a crashdump-$datestamp package in $PWD.... ok lets get lutostag in on this one to help if he's around
<cnf> juju then tries to create bridges, and move the ip's
<cnf> for some reason it fails
<lazyPower> thanks cory_fu for taking a look.
<cnf> and then complains it can't find ips
<ybaumy> cnf: hmm that looks like a mess. i wouldnt know where to start too sorry
<ybaumy> which vlan is the public ip
<ybaumy> for the lxd containers
<Budgie^Smore> lazyPower, lutostag for the record here is the command I used to install juju-crashdump: $ sudo snap install juju-crashdump --edge --classic
<cnf> you get BridgeName:br-enp3s0f0.4013}] devices on host "machine-0" for container "machine-0-lxd-7" with delay=0, acquiring lock "machine-lock" to-bridge="enp3s0f0.4013" --activate --bridge-prefix=br-  --reconfigure-delay=0 /etc/network/interfaces <<'EOF'
<cnf> and then a few blank lines
<cnf> and then find host bridge for space(s) "space-public" for container "0/lxd/7"), retrying in 10s (3 more attempts)
<cory_fu> lazyPower: lutostag has much more understanding of crashdump.  I just use it from time to time.  ;)
<cnf> and the same thing again
<cnf> >,<
<lazyPower> cory_fu: i still associate you as a stakeholder since you pimp it so much :)
<lazyPower> and by extension, i now pimp it just as much
<Budgie^Smore> lol
<ybaumy> you are using maas dhcp right on that vlan?
<Budgie^Smore> well apparently you are pimping an STI right now :P
<lazyPower> O_O
<cnf> ybaumy: it's not dhcp, it's maas assigned
<lazyPower> Budgie^Smore: if you snap list, do you have 1.0.0 of juju-crashdump installed?
<jamespage> cnf: bridges not appearing right?
<jamespage> cnf: beisner just highlighted this problem to me - https://bugs.launchpad.net/juju/+bug/1672327
<mup> Bug #1672327: Too long names for bridges <juju:Triaged> <MAAS:Triaged> <https://launchpad.net/bugs/1672327>
<Budgie^Smore> lazyPower hey that was the cleanest of the jokes that flashed through my head... anyway http://paste.ubuntu.com/24229699/
<cnf> jamespage: well, i'll be...
<jamespage> cnf: those were not my words...
<jamespage> damn ivoks is not here...
<cnf> why do I have to run into every fucking weird bug with this?
<jamespage> Dmitrii-Sh: re bug https://bugs.launchpad.net/juju/+bug/1672327
<mup> Bug #1672327: Too long names for bridges <juju:Triaged> <MAAS:Triaged> <https://launchpad.net/bugs/1672327>
<jamespage> Dmitrii-Sh: did you and ivoks figure out a workaround for that?
<Budgie^Smore> cnf, welcome to the club, just ask lazyPower how many weird issues I have hit
<ybaumy> jamespage: isnt it possible to use udev for that?
<Dmitrii-Sh> jamespage: as a workaround we just renamed the interfaces to something really short
<cnf> Dmitrii-Sh: renamed them where?
<cnf> in maas?
<Dmitrii-Sh> jamespage: e.g. instead of en1s0 use e0
<Dmitrii-Sh> cnf: yes
<cnf> hmm, ok
<cnf> i'm fine with using eth0 and eth1, ffs
<jamespage> cnf: we all were :-)
<Dmitrii-Sh> cnf: the problem is that UAPI kernel headers have a byte limit of 15
<Dmitrii-Sh> cnf: same with libc
<cnf> <very quick rant> stupid ass retarded piece of shit systemd!</rant over>
<Dmitrii-Sh> cnf: :^)
<cnf> ok
<cnf> jamespage: well, i'd say both maas and juju should guard against this
<cnf> instead of just silently failing
<jamespage> don't disagree
<lazyPower> yeah, thats all the same here too Budgie^Smore not sure why its tanking :( this bums me out
<Dmitrii-Sh> cnf: links to the kernel code and libc are in the comment if needed
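The 15-character limit Dmitrii-Sh describes can be sketched with the interface names from this log (a hedged illustration: the kernel's IFNAMSIZ is 16 bytes including the trailing NUL, leaving 15 usable characters):

```python
# Sketch of bug #1672327: interface names are capped at IFNAMSIZ
# (16 bytes including the NUL terminator, so 15 usable characters).
# juju prepends "br-" when bridging a host NIC for containers, so a
# long MAAS NIC name overflows the limit and bridging fails.
IFNAMSIZ_USABLE = 15

def bridge_fits(nic, prefix="br-"):
    # True if the bridged name juju would create is a legal ifname
    return len(prefix + nic) <= IFNAMSIZ_USABLE

print(bridge_fits("bond0.4011"))     # True  -- "br-bond0.4011" is 13 chars
print(bridge_fits("enp3s0f0.4013"))  # False -- "br-enp3s0f0.4013" is 16 chars
```

This matches what cnf sees later: `br-bond0.4011` comes up fine while the bridges on the longer `enp3s0f0.4013` names never appear.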
<cnf> ok, so destroying the setup
<cnf> changing network names
<cnf> and  deploying again
<Dmitrii-Sh> cnf: y
<Budgie^Smore> lazyPower sorry :-/ the sarcastic side of me has a few choice "jokes" for the devs right now ;-)
<Budgie^Smore> lazyPower hey at least this one isn't your fault :)
<lazyPower> Budgie^Smore: do what you gotta do man ;) i can filter appropriately
<cnf> jamespage: asked a maas question over in #maas
<lazyPower> ikr!
<lazyPower> for a change, i dont feel totally responsible
<jamespage> Dmitrii-Sh: how exactly did we do the renames for the interfaces?
<Budgie^Smore> lazyPower nope I won't go to that dark place, I am a reforming ops guy :P
<ybaumy> i dont know much about that but cant you just modify udev network rules every first boot to rename interfaces
<ybaumy> thats what i would do
<cnf> ybaumy: modify them how?
<cnf> ybaumy: this is all automated deploys
<jamespage> ybaumy: that's pretty much exactly what maas will do on deployment
<Dmitrii-Sh> jamespage: in short: I don't have a script for it yet. I just renamed them via maas gui. Changed interface names to 2-byte names and then redeployed. Any VLAN interfaces are updated automatically by maas
<cnf> Dmitrii-Sh: indeed
<ybaumy> jamespage: ah ok
<cnf> i'm collecting a LONG list of maas and juju bugs :/
<jamespage> cnf: hearing you
<ybaumy> jamespage: i would have modified cloud-init but thats probably what maas does too
<cnf> Dmitrii-Sh: do you know how to get back interfaces you deleted in MAAS?
<cnf> Dmitrii-Sh: assuming you don't know the MAC addresses anymore :P
<jamespage> cnf: I suspect if you recommission in the machine, they will get re-discovered - but I'd defer to those with superior MAAS knowledge to me
<cnf> k
<Dmitrii-Sh> cnf: better to recommission them, yes. That should boot an ephemeral ubuntu image via PXE, get hw data again, and bail out
<cnf> so br-bond0.4011 is fine, at least
<cnf> so anyone know how i can change the user a juju controller uses?
<cnf> to talk to maas, that is
<jamespage> you have to switch the credential being used... I'd have to dig docs
<ybaumy> cnf: ~/.local/share/juju there is a credentials file
<Budgie^Smore> juju add-credential with the same id for the credential will update it
<cnf> ybaumy: yes, which isn't used after bootstrap
<ybaumy> cnf: last time i just edited that
<ybaumy> or like Budgie^Smore says
<cnf> as far as I can see, that doesn't get used after bootstrap
<Budgie^Smore> it should be used anytime juju needs to get cloud resources
<cnf> it isn't
<ybaumy> i thought so too
<cnf> try it
<cnf> bootstrap with one user
<cnf> then change local credentials
<cnf> and then deploy something
<cnf> they'll be added to maas with the original credentials
<ybaumy> hmm
<ybaumy> then you have to use the command
<ybaumy> sorry then
<ybaumy> cant try right now. backing up everything
<jamespage> cnf: I think you should be able to do it with 'juju update-credential'
<cnf> juju update-credential, yeah
<cnf> jamespage: that works, indeed
<cnf> jamespage: doesn't change what is already running, though :P
<cnf> but it is something, thanks
<jamespage> cnf: in terms of how that's allocated in maas
<jamespage> no I'd not expect it to do that
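The behavior cnf observed matches the split between the client-side cache and the controller: the file under `~/.local/share/juju` is only read when the client bootstraps or adds models, while `juju update-credential` changes what a live controller uses for new machines. A hedged sketch of that cache file (the cloud name "maas-cloud" and credential name "juju-bot" are placeholders, not from this log):

```yaml
# ~/.local/share/juju/credentials.yaml -- client-side cache. Editing it
# does NOT change a running controller; `juju update-credential` does.
# Machines already allocated in MAAS keep their original owner either way.
credentials:
  maas-cloud:
    juju-bot:
      auth-type: oauth1
      maas-oauth: <consumer-key>:<token-key>:<token-secret>
```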
<Budgie^Smore> ok cnf, that is the part that I don't get
<cnf> ok, while those HP machines are booting, i'm going for a shower, and look for food
<cnf> Budgie^Smore: what?
<jamespage> cnf: I'm also concerned you might island existing allocated resources.
<ybaumy> cnf: you can't change the owner of an added resource in maas once it's added i guess
<Budgie^Smore> cnf when you say change what is already running
<cnf> jamespage: yeah, and it's the controller :P
<cnf> lets see if we can get openstack working, i'll worry about ACL's and accounts later
<cnf> bbiab
<cnf> jamespage: thanks so far
<Budgie^Smore> cnf I probably missed it but what are you trying to accomplish after you change the credentials juju uses?
<cnf> Budgie^Smore: juju should use a juju user in talking to maas
<cnf> i was logged in with my admin user
<Budgie^Smore> cnf, ok I think there maybe a misunderstanding in the difference of cloud credential and juju user here
<ybaumy> a juju user is just a user of a cloud but not the cloud user
<ybaumy> right?
<Budgie^Smore> cnf, they are 2 separate ACL systems, reason being clouds (including MaaS) auth APIs are different and the juju user is for interfacing with juju
<Budgie^Smore> cnf, for example for AWS it isn't even a username and password you pass to authenticate, it is 2 keys which are associated with a user account on Amazon's end
<sfeole> petevg, ping you around?
<petevg> stroke: i am.  What's up?
<petevg> sfeole: silly autocorrect munged your name, though.
<sfeole> petevg, hey, i wanted to use some exit handlers in libjuju, one that i looked at was atexit,  But i don't think I can properly utilize that handler with asyncio
<petevg> sfeole: there are usually asyncio equivalents for that sort of thing.
<sfeole> petevg, ahh there are
<sfeole> petevg, i'll take a look then
<sfeole> petevg, i want to simply destroy a model upon exit
<petevg> sfeole: cool. Ping me if you want to bounce anything off of me.
<sfeole> petevg, sounds good
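The asyncio-friendly equivalent petevg alludes to can be as simple as a `finally` block around the task body, guaranteeing the model teardown runs whether the test passed or raised. A sketch with the libjuju calls stubbed out (`run_test` and `destroy_model` are placeholder names, not libjuju APIs; in real code the latter would wrap something like a controller's destroy-model call):

```python
import asyncio

events = []

async def run_test():
    # stand-in for real work against a libjuju model (deploy, checks, ...)
    events.append("deployed")

async def destroy_model():
    # stand-in for the actual libjuju teardown call
    events.append("destroyed")

async def main():
    try:
        await run_test()
    finally:
        # runs on normal exit AND if run_test() raised -- the
        # asyncio-friendly replacement for an atexit handler here
        await destroy_model()

asyncio.run(main())
print(events)  # ['deployed', 'destroyed']
```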
<cnf> Budgie^Smore: ybaumy so i talk to juju, not to the cloud
<cnf> Budgie^Smore: ybaumy so i should not HAVE cloud credentials, juju should
<Budgie^Smore> cnf what I haven't tried is if you can talk to juju without being able to talk to the cloud (or MaaS) but I have tried logging in as multiple users to juju from the same machine so that both juju accounts had access to my cloud credentials
<cnf> Budgie^Smore: well, if you have several users talking to maas, that gets a mess
<cnf> because each need their own cloud credentials?
<cnf> then you need to mirror ACLs
<cnf> that will become a MESS very very fast...
<Budgie^Smore> cnf and in the juju world, models are "owned" by users and you have to give other users access the model
<cnf> Budgie^Smore: right
<cnf> Budgie^Smore: but then do the other users also need access to the same underlying cloud resources?
<cnf> and what if they don't match?
<cnf> so i expect the juju controller to use one user / api key / whatever
<cnf> and deal with juju user itself
<Budgie^Smore> cnf oh I kinda get that but from a security stand point you want all your ops people to have their own account on both the cloud and juju. this is more important in public clouds though where you already have to set people up there separate from juju
<cnf> Budgie^Smore: no, no you really don't
<cnf> Budgie^Smore: because ACLs on both will NOT match up
<cnf> they won't have the same mechanisms
<cnf> ( i mean, you want them to have a cloud account to deal with cloud issues, but not to talk to the cloud through juju)
<Budgie^Smore> cnf and there is why you see why they are separate in juju to start with
<cnf> Budgie^Smore: what?
<cnf> a juju user should NOT! need a cloud account to talk to juju and do stuff
<Budgie^Smore> no you want them to use their cloud credentials through juju especially since otherwise you wouldn't be able to tell actions apart
<cnf> no, no you really don't...
<cnf> that will be a mess
<cnf> it just can't work
<Budgie^Smore> cnf how are you going to get instances spun up with out clound access?
<cnf> Budgie^Smore: JUJU should have cloud access
<cnf> not the juju user
<cnf> this is how you set up vnf controllers...
<Budgie^Smore> cnf how are you going to tell apart who did what through juju to the cloud?
<cnf> it just doesn't work if you need credentials all the way down the line...
<cnf> Budgie^Smore: in juju...,
<cnf> that is why you have a juju controller...
<cnf> Budgie^Smore: say i make a budgie/default model, which you have access to
<cnf> you need a budgie user on the cloud
<Budgie^Smore> cnf but that is not how it works nor would any infosec team I know sign off on it working like that. yes you can use juju to determine some of that but infosec wants to be able to see it at every level and shared access at any level is frowned upon significantly
<cnf> then i give bob access to budgie/default
<cnf> now the bob user on the cloud needs access to your resources on the cloud
<cnf> but the cloud has no granularity, so he has access to ALL your resources?
<cnf> Budgie^Smore: uhm, sure infosec would sign off on it, why would they not?
<cnf> that's nonsense
<cnf> that's how orchestration works
<cnf> needing credentials on every single step along the way is unmaintainable
<cnf> and as such a security nightmare
<cnf> it just does NOT work or scale in any form whatsoever
<Budgie^Smore> cnf I would love to get into the ins and outs of why the design is the way it is and why infosec likes it that way but I do actually have a cluster I need to spin up sorry. your work around is to have a shared account on maas that you give out to everyone
<cnf> Budgie^Smore: infosec doesn't like it your way :P
<cnf> and no way
<cnf> no one is getting cloud access
<cnf> at all
<cnf> not one bit
<cnf> if juju needs that, i don't think i'll happen at $currentclient
<Budgie^Smore> cnf oh I highly doubt that but then running 30% of the world's Internet might make me jaded about what enterprise infosec people want
<Budgie^Smore> cnf on and don't get me started about auditors - internal and external - and their requirements!
<cnf> you run 30% of the worlds internet?
<Budgie^Smore> was part of the company that does until recently
<cnf> sure
<cnf> jamespage: now i'm at message: 'can''t get info for image ''juju/xenial/amd64'': not found' :P
<Budgie^Smore>  http://www.reuters.com/article/us-akamai-tech-results-idUSKBN0NJ2IV20150428 - "Akamai, which delivers between 15-30 percent of all Web traffic"
<Budgie^Smore> cnf if there is one thing I never lie about it is what I have done in my career. hell it might come across as bragging but truth is it still blows me away what I have accomplished and who I have worked for over the years
<Budgie^Smore> think I am going to "test" out JaaS to deploy the new cluster
<rick_h> Budgie^Smore: let us know if you hit anything.
<Budgie^Smore> rick_h: ack that
<rick_h> Budgie^Smore: make sure to login to the website first to make sure your account is ready to go
<Budgie^Smore> rick_h: I just logged in through the juju interface before I started modelling right?
<rick_h> Budgie^Smore: rgr
<Budgie^Smore> rick_h yeah I figured that would be a good first step :)
<Budgie^Smore> rick_h I have "played" with the demo for years
<rick_h> Budgie^Smore: sec, let me get you the in dev docs branch as well
<rick_h> crash course!
<rick_h> Budgie^Smore:https://github.com/juju/docs/blob/jaas/src/en/getting-started-jaas.md
<rick_h> Budgie^Smore: hah, well the "demo" is going to get more fun for you this time
<Budgie^Smore> rick_h come on now, I am a man and you expect me to read a manual ;-) (side note, would be better if the images actually weren't broken links :P)
<rick_h> Budgie^Smore: yea, once the branch lands and it's rendered on jujucharms.com/docs it'll be pretty and themed and such
<rick_h> Budgie^Smore: just some in-flight stuff as it goes through reviews/etc
<Budgie^Smore> rick_h yeah I get that :) just giving you crap
<rick_h> Budgie^Smore: bring it on! :P
<Budgie^Smore> rick_h I could come work with you as a colleague and not just a user :P
<cnf> jamespage: so things are still not coming up
<cnf> i think i am missing relations?
<Budgie^Smore> I am pondering adding charmscaler to this cluster
<Budgie^Smore> wonder how well it would work in AWS though
<rick_h> Budgie^Smore: :) https://www.canonical.com/careers
<anastasiamac> cnf: m almost here now... it's only just before 6am :D how can I help?
<Budgie^Smore> rick_h, *cough* no comment, I am taking the 5th *cough*
<cnf> anastasiamac: https://bugs.launchpad.net/juju/+bug/1674759 but i just redeployed :/
<mup> Bug #1674759: juju upgrade-juju doesn't honor proxy settings <juju:Incomplete> <https://launchpad.net/bugs/1674759>
<cnf> anastasiamac: so take your time :P
<cnf> it's almost 21:00 here anyway :P
<cnf> now i'm stuck on the next issue :/
<Budgie^Smore> rick_h quick question, is it possible to modify the constraints after you have created a machine in a model or is that something that is only done when adding a machine?
<rick_h> Budgie^Smore: yea, only done when adding a machine. If you set them on an application level you can change the constraints and new units pick up the new values
<Budgie^Smore> rick_h oh I get that, just makes it kinda tricky to use bundles where you might want different constraints than default
<Budgie^Smore> rick_h suppose I could just download the model, modify it to what I want and import it back?
<rick_h> Budgie^Smore: yea, I think the idea is that you'd swap up any constraints in the bundle yourself as that changes what's setup.
<rick_h> Budgie^Smore: at some point we'll allow config/placement/etc overrides during the deploy command
<Budgie^Smore> rick_h ok, follow up question, does the UI have a way of updating the default constraints?
<rick_h> Budgie^Smore: hmm, I'm trying to trace that question. What are the default constraints?
<cnf> and i'm betting jamespage has gone home for the day :P
<Budgie^Smore> rick_h oh that would be nice, kinda like the UI for doing the placement overrides, etc... I can't remember but the last time I left everything alone I got m3.medium instances which is only 4G of mem
<rick_h> Budgie^Smore: so in the GUI you can alter constraints before hitting deploy
<Budgie^Smore> rick_h think that is based on the juju controller memory requirements
<rick_h> Budgie^Smore: I guess I'm not sure which "UI" you're referring and such.
<cnf> and one message: 'can''t get info for image ''juju/xenial/amd64'': not found'
<rick_h> Budgie^Smore: well it's a "default" value. Like anything, it needs to not be too crazy for folks trying/testing/etc and those that are running long running production systems
<cnf> anyone know what that is about?
<Budgie^Smore> rick_h sorry I am a bit old school use UI to mean GUI and CLI for ... well CLI
<rick_h> Budgie^Smore: all good
<Budgie^Smore> Budgie^Smore ok if I should be able to alter the constraints in the GUI, I am not finding where, only option I get when clicking on the machine is destroy
<Budgie^Smore> rick_h about the only way I can think of doing it would be to destroy the "new" machines and "add" new ones with the constraints I want baked in and then replacing the charms back on to the newly created machines... oh and wow when I have a brain fart I have a brain fart
<rick_h> Budgie^Smore: or download the bundle yaml and go down to the machines and edit the numbers before doing deploy?
<Budgie^Smore> yup
<Budgie^Smore> seems a little counterproductive when the GUI does give the ability to add constraints that it wouldn't have a way to update predeploy because technically the machine isn't added until after it is deployed
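rick_h's suggested workaround in bundle form: in an exported bundle, per-machine constraints live under the `machines` section and can be edited before redeploying, since they are only honored at machine-creation time. A hedged sketch (the charm and instance type here are illustrative, not from this log):

```yaml
# Edit before `juju deploy ./bundle.yaml`; constraints on existing
# machines cannot be changed afterward, as discussed above.
machines:
  "0":
    constraints: instance-type=m4.large  # bumped from the small default
services:
  etcd:
    charm: cs:etcd
    num_units: 1
    to: ["0"]
```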
<hatch> Budgie^Smore you are correct - this feature has been on our roadmap BUT if you're feeling adventurous you could file a feature request here: https://github.com/juju/juju-gui/issues  :)
<Budgie^Smore> hatch thanks :) I might do that... good knowing someone else thinks it is worth "fixing"
<hatch> Budgie^Smore in fact https://github.com/juju/juju-gui/issues/1982 :D
<Budgie^Smore> hatch in the meantime I have a "workaround" that will do it anyway
<hatch> good good
<Budgie^Smore> I have liked and am watching the issue
<hatch> great thanks
<tvansteenburgh> rick_h: i need some clarification about resources. i thought a resource was attached to a specific revision of a charm, but that doesn't seem to be the case.
<rick_h> tvansteenburgh: when you do a release you release with a specific revision of a charm and a specific revision of a resource
<rick_h> tvansteenburgh: I'm actually playing with the charm command today and making it part of the release output that it shows what version of the charm and what revision of each resource are in each channel
<tvansteenburgh> rick_h: right, that's what i thought. so the resource revisions sticks with that charm revision
<rick_h> tvansteenburgh: well it's malleable via the release calls
<rick_h> tvansteenburgh: so a single charm revision can have a series of resource revisions over time
<tvansteenburgh> rick_h: i have a situation where old revisions of a charm are deploying with a much newer resource than they were published with, which was unexpected. trying to figure out how that happened
<rick_h> tvansteenburgh: either a bug or someone did new releases with the charm command and updated resources
<Budgie^Smore>  rick_h do I need to be running juju 2.2 cli to log into JaaS from the cli?
<rick_h> Budgie^Smore: no, 2.x
<Budgie^Smore> rick_h using juju login jaas?
<Budgie^Smore> rick_h I would get it if I knew where to find the register command I need :)
<hatch> Budgie^Smore juju register
<rick_h> juju register jimm.jujucharms.com
<hatch> juju register jimm.jujucharms.com jaas
<hatch> rick_h don't forget the fancy name :)
<rick_h> doh!
<Budgie^Smore> ah ok, I vaguely remember that from somewhere :)
<rick_h> Budgie^Smore: *cough* in that docs page *cough*
<Budgie^Smore> rick you sure about that?
<hatch> Budgie^Smore were there places you looked for that command? It's possible we should have it in more places, or more accessible places
<Budgie^Smore> I was looking at the controller page of the jaas branch for starters rick_h
<tvansteenburgh> rick_h: sorry, i don't see how someone could update a resource that was already released ?
<Budgie^Smore> rick_h but that is after I scanned that page and couldn't see anything about registering to the controller
<tvansteenburgh> rick_h: e.g. `charm release wily/django-42 --resource website-3` <- afaik, neither django-42 nor website-3 can be changed now right?
<hatch> Budgie^Smore you're right, it's on a special CLI docs page
<rick_h> tvansteenburgh: sec otp
<hatch> Budgie^Smore https://github.com/juju/docs/blob/jaas/src/en/jaas-cli.md
<hatch> we should probably make these options more obvious at the top of each page
<rick_h> tvansteenburgh: I can always do charm release wily/django-42 --resource website-4
<hatch> thanks Budgie^Smore
<tvansteenburgh> rick_h: ok, i didn't know that
<tvansteenburgh> rick_h: but in my case, i don't think someone went through every old charm rev and updated the resource
<Budgie^Smore> hatch I keep forgetting that the CLI is its own section, to be honest I almost always expect to see UI and CLI steps in the same place in the docs
<tvansteenburgh> technically i see it's possible now, i'll need to check
<Budgie^Smore> hatch to me the CLI page is for stuff the CLI can do that the UI can't
<rick_h> tvansteenburgh: k, just stating what can be done there.
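A toy model of what rick_h is describing (an illustration, not charm store code): a channel points at one charm revision plus one revision per resource, and a later `charm release` of the *same* charm revision can swap in a newer resource.

```python
# Toy release map: channel -> {charm revision, pinned resource revisions}.
channels = {}

def release(channel, charm_rev, resources):
    # Re-releasing the same charm revision overwrites the channel's
    # resource pins -- which is how an old charm rev can end up
    # deploying with a much newer resource than it was published with.
    channels[channel] = {"charm": charm_rev, "resources": dict(resources)}

release("stable", "django-42", {"website": 3})
release("stable", "django-42", {"website": 4})  # same charm rev, newer resource
print(channels["stable"]["resources"]["website"])  # 4
```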
<Budgie^Smore> oh and hatch, (based on my login just now) rick_h had it right, you can't pass 'jaas' that way... register seems to take only 1 argument
<hatch> heresy!
<hatch> Budgie^Smore 2.1.0?
<hatch> er, version 2.1.0
<hatch> wait you
<hatch> re right
<Budgie^Smore> http://paste.ubuntu.com/24230847/
<hatch> I failed
<hatch> plz disregard me :)
<Budgie^Smore> sometimes you gotta give credit where credit is due
<hatch> haha indeed
<cory_fu> layer-basic PR for general review: https://github.com/juju-solutions/layer-basic/pull/92
<Budgie^Smore> lazyPower I am going to "stop" that cluster until we can determine what we want to get off it and how
<tvansteenburgh> ninja'd marcoceppi
 * tvansteenburgh notches belt
<kwmonroe> marcoceppi: you are awesome... i'm pretty sure you're responsible for charm build messages like "build: layer.yaml includes hbase-quorum which isn't used in metadata.yaml".  saving my hide all day chief!
<magicaltrout> i told you not to break it....
<Budgie^Smore> OK next crazy question ... where does juju get the 10.0.0.0 IP range when it is deploying lxd containers?
<kwmonroe> hell, this juju magic makes it damn near impossible to break things.  #amirite magicaltrout?
<cnf> haha!
<cnf> funny
<Budgie^Smore> ok I have found what I was looking for, what's the best way to override the IP address scheme when deploying LXD into a mixed LXD / machine environment so that they share the same range?
<Budgie^Smore> ok I feel like I am missing something basic when it comes to the Juju deployment of LXD containers
<smgoller> Hey all, how does juju tell maas about containers it creates?
<smgoller> i.e. in the code
<rick_h> Budgie^Smore: so having containers on the same host network range is only supported on Maas right now
<rick_h> Budgie^Smore: aws only lets a host have 1 Mac address so containers can't have the addresses and the host for instance
<rick_h> Budgie^Smore: there's work to make that work on manual/OpenStack/etc
<rick_h> Where there's something to do up address management on the network.
<cnf> does anyone know what this means: 'can''t get info for image ''juju/xenial/amd64'': not found'
<cnf> also, how do i kick something to tell it stuff has changed, and it should try again?
<Budgie^Smore> rick_h ah ok now that make sense
<kwmonroe> cnf: wadda you mean by stuff?  if a charm is in an error state, you can say "juju resolved <foo>/<x>" to make juju retry the last thing that may have failed.
<cnf> it's not in an error state
<Budgie^Smore> rick_h do all the instances need public IPs or can internal stuff use the VPC's private IP and only give public IPs to instances I want access to?
<kwmonroe> rick_h: is the bundle spec public, or rather, can i make a wish list for the next bundle spec?
<cnf> it just says 'can''t get info for image ''juju/xenial/amd64'': not found'
<kwmonroe> rick_h: my kingdom for "services:  foo:  charm:  foo\n channel: edge"
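kwmonroe's wished-for bundle field, spelled out (hypothetical: `channel` is not a field in the bundle spec he links):

```yaml
# Hypothetical bundle snippet: a per-application channel pin, so a
# bundle could track e.g. the edge channel of a charm. Not part of
# the current bundle spec.
services:
  foo:
    charm: foo
    channel: edge
    num_units: 1
```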
<cnf> so i dunno what the fuck to do :/
<rick_h> kwmonroe: file bugs on GH/juju/charm maybe?
<cnf> i can't use retry-provisioning, because that doesn't support containers
<cnf> so how do i kick it?
<kwmonroe> sure rick_h - i don't mind opening an issue, just wanted to get the right place... for ex, https://github.com/juju/charmstore/blob/v5-unstable/docs/bundles.md is the last place i saw reference to version X of a bundle spec.
<kwmonroe> if gh/juju/charm is the right place, i'll open my wants there.
<rick_h> kwmonroe: yes there's a v6 unstable branch now
<cnf> so how do i make it retry provisioning containers? o,O
<rick_h> cnf: juju retry-provisioning ?
<cnf> rick_h: doesn't support containers!
<cnf> error: invalid machine "1/lxd/0" retry-provisioning does not support containers
<rick_h> cnf: ah my apologies.
<cnf> rick_h: i do apologize if i'm a bit snappy, been running into problems like this for a few weeks now
<rick_h> Understand, my fault. I recall that now.
<cnf> i keep running into deadlock issues it seems
<cnf> not confidence inspiring
<cnf> i have stuff that has failed, and no bloody way to fix it >,<
<cnf> hmm
<rick_h> cnf: I'm at my son's violin. Do me a favor. Let's regroup tomorrow. I'd like to help but not sure about the background of your setup and what state things are in. Maybe tomorrow we can set a base via Hangouts and see where to go forward.
<cnf> rick_h: that'd be nice, i'm in CET though
<cnf> don't know if your timezone matches up
<cnf> rick_h: but now, be at your sons violin
<rick_h> EST here
<cnf> you ain't getting that back
<kwmonroe> oh lordy rick_h.  kid's violin.  you are a brave soul.
<rick_h> All good, he's with a teacher
<rick_h> I'm sitting outside but we're about to leave. Lessons end in 2min
<cnf> rick_h: i'll be at the office between 10:00 and 18:00 CET tomorrow
<magicaltrout> Junior Strings used to practice down the hall from the Junior Jazz Band when I was a kid
<magicaltrout> it was like strangling cats
<rick_h> cnf ok, so you're some 5hrs ahead of me
<cnf> rick_h: i'll go in late(er) and stick around a while
<rick_h> cnf: so I'll be a bit late for your morning but will ping when I get my breakfast in front of my computer
<cnf> i can make that 11:00 to 19:00 or so
<cnf> rick_h: thanks, that'd be appreciated
<Budgie^Smore> rick_h one more crazy question if you are still around, is there a way to specify the AWS VPC ID to use in the GUI?
<hatch> Budgie^Smore nope
<Budgie^Smore> hatch can you do it from the cli using jaas or am I going to need my own controller?
<hatch> hmm
<Budgie^Smore> I came across a juju bootstrap command that forces the vpc-id using --config
<hatch> right....
<Budgie^Smore> I am wondering if I constrain the add machine to a space that is assigned to a subnet in the right VPC if that would be sufficient
<hatch> Budgie^Smore so I'm not actually sure, there was some discussion around vpc-id on a per model basis but that's outside of my ballpark
<hatch> I can find out tomorrow though when the correct peeps get in
<hatch> are you able to file a bug on the GUI project to that effect and I can reply in kind?
<Budgie^Smore> hatch I think it would be awesome to have a per model VPC
<hatch> tomorrow that is
<Budgie^Smore> I will try and get around to that, I need to get this cluster up today if I can
<hatch> sure, thanks, I'll make a note none the less to bring it up, so if you're around tomorrow I'll try and get back to you
<Budgie^Smore> trying to secure it as best I can at the moment and the legacy stuff using the default vpc is a mess
<Budgie^Smore> hey hatch so I started "hand building" my model using the CLI and I don't see the model in the jaas version of the GUI
<hatch> Budgie^Smore when you run `juju list-controllers` do you see the jaas controller?
<hatch> selected
<hatch> it should have a * beside it
<Budgie^Smore> yeah and I didn't get any errors running juju deploy
<Budgie^Smore> jaas*  k8s-aws     gburgess@external  (unknown)                      -         -     -  2.0.0
<hatch> can you run `juju list-controllers--refresh`
<hatch> it's showing there that you have no models/machines
<hatch> those last few dashes
<hatch> er
<hatch> can you run `juju list-controllers --refresh`
<Budgie^Smore> yeah looks like it is having problems allocating a machine, if what I am reading from juju status is anything to go by
<Budgie^Smore> ok that command still hasn't returned hatch
<hatch> hmm
<hatch> it's fast here
<hatch> maybe it got hung in the wild wild webs
<hatch> :)
<hatch> Budgie^Smore when you visit https://jujucharms.com/u/gburgess do you see your model? You may have to log in if you haven't already
#juju 2017-03-23
<Budgie^Smore> that is what I am saying hatch ... didn't complain about anything unusual creating the model from the CLI but when I try to see it in the web UI it doesn't exist
<hatch> that's....odd
<hatch> let me give it a try here
<Budgie^Smore> oh and the --refresh command is doing http://paste.ubuntu.com/24231722/
<hatch> hmm ok let me see what I can find out for you
<hatch> gimme a few minutes
<Budgie^Smore> sure but I feel I am going to have to use a hosted juju controller to get around this for now
<Budgie^Smore> not a huge deal was just using Jaas to avoid having to have a separate juju controller instance
<Budgie^Smore> oh that is a shame that you can't update the pastebin paste
<Budgie^Smore> ok nevermind about the --refresh problem, I see what it is... it is trying to refresh a controller that is powered off
<hatch> yeah, I'm trying to figure out why the models aren't showing up still :)
<Budgie^Smore> although I would suggest that that is technically a bug since it should really time out on controllers it fails to access rather than trying indefinitely
<hatch> oh definitely
<Budgie^Smore> looks like it is set to retry every 7 secs too
<Budgie^Smore> correction 2mins 7secs-ish
<Budgie^Smore> hatch yeah I would kinda expect to see a blank model in the web ui after issuing the add-model command and before doing any deploy command
<Budgie^Smore> hmmm ok now it is has shown up
<Budgie^Smore> I am hitting the replication buffer
<Budgie^Smore> 2 mysteries solved, 1 to go
<Budgie^Smore> hatch ^
<hatch> :D
<hatch> which one is to go still?
<Budgie^Smore> why it can't create the instance
<Budgie^Smore> and I know what that is just don't think I can prove it easily and more importantly safely... I suspect it is the default vpc issue for the model since I am using 'juju deploy etcd --constraints "instance-type=m3.medium spaces=private"' to deploy it and the "private" space is in a different VPC than what jaas can currently handle
<rick_h> Budgie^Smore: that's perfectly possible. A bug with what you're trying to do would be great as early beta feedback so we can investigate and see what we need to do.
<rick_h> Budgie^Smore: as much copy/paste as possible.
<Budgie^Smore> I think it is more of a feature request of having the ability to assign a model to a vpc-id some how but that is very cloud specific so I don't know what priority it would be worth giving it
<Budgie^Smore> from a security stand point how it currently is working makes sense too, juju shouldn't step outside the default VPC unless given the ID to do  so
<Budgie^Smore> hatch ^ make sense?
<hatch> definitely
<Budgie^Smore> the bug I will file though is the one lack of a timeout on --refresh when you have controllers that are inaccessible at the point in time
<hatch> thanks
<Budgie^Smore> unfortunately that does mean the end of my "experiment" of JaaS for now
<hatch> sorry Budgie^Smore
<hatch> I'll talk with the team tomorrow and see what I can find out about the VPC issue
<Budgie^Smore> don't be, it was fun and love to see how far that has come so far... AWS is only temporary for me anyway
<hatch> :)
<hatch> I have to run, have a good night Budgie^Smore
<Budgie^Smore> I want to see the design spec for when you can use JaaS with say a private MaaS environment
<rick_h> Budgie^Smore: patience, one step at a time :)
<Budgie^Smore> rick_h oh I have that, in the meantime I have to use the "workaround", it's all good :)
<Budgie^Smore> rick_h that comment was more a vision of the potential it has in my "use-case"
<Budgie^Smore> hmmm looks like the example I am trying to "follow" is using a future feature
<Budgie^Smore> --to "subnet=subnet-bb1ab2dc"
<Budgie^Smore> https://bugs.launchpad.net/juju/+bug/1659640
<mup> Bug #1659640: Juju needs to support specifying subnets for controller <juju:Fix Committed by jameinel> <https://launchpad.net/bugs/1659640>
<rick_h> Budgie^Smore: yea, but that was landed recently. It's in the alpha out now
<Budgie^Smore> rick_h yeah but it hasn't hit any ppa repo yet, right?
<rick_h> Budgie^Smore: it's in the juju devel ppa
<rick_h> Budgie^Smore: or use the snap :)
<rick_h> Budgie^Smore: then you can keep things apart nicely
<Budgie^Smore> rick_h I found a page that said the devel was on 2.1.2!
<Budgie^Smore> rick_h of course I have "lost it" again
<rick_h> Budgie^Smore: yea, I think you're right. Maybe it's only in the snap now? /me checks around
<rick_h> https://launchpad.net/~juju/+archive/ubuntu/daily has 2.2 alpha
<rick_h> and here: https://launchpad.net/~juju/+archive/ubuntu/devel
<rick_h> that has 2.2
<Budgie^Smore> yes I see that but I didn't look at the repo... not sure if I want to run an alpha build though and I had another "workaround" that worked too :) remove a subnet temporarily and use --to zone=us-west-1a
<rick_h> Budgie^Smore: and http://paste.ubuntu.com/24232393/
<rick_h> Budgie^Smore: yea, that's why I like the snap way. You get a new /snap/bin/juju to work with
<rick_h> Budgie^Smore: but yea, it's always the way. The feature you need/want is in the NEXT release :)
<Budgie^Smore> yeah I should probably use snap more
<Budgie^Smore> oh hang on a min... edge is at 2.2 beta and beta is at 2.2 alpha?
<Budgie^Smore> lol ok that is mildly amusing
<rick_h> Budgie^Smore: so edge is a daily build that tracks "the next release"
<rick_h> Budgie^Smore: so the non-released, but next target (e.g. what the source code says it is currently if you build from source...) is beta
<rick_h> late night for me
 * rick_h goes to take out the trash and calm down before bed
<Budgie^Smore> oh it is going to be a late one for me
<Budgie^Smore> just seems my day to hit weird and wonderful roadblocks
<kjackal> Good morning Juju world!
<ybaumy> moin
<ybaumy> i forgot to set VIP for mysql or hacluster
<ybaumy> how do i do that after deployment
<ybaumy> ah i got it
<ybaumy> juju config mysql vip=IP
<ybaumy> juju resolved mysql/0 .. 1 .. 2
<cnf> morning
<cnf> lets try this again
<BlackDex> hello there. I'm using juju 2.1.2 and spaces. I Have a private and a public space. But if i want to deploy mediawiki for instance, i'm not able to let it use the two spaces. It will only bind to one.
<BlackDex> Is there a setting i'm missing?
<BlackDex> thx to jamespage on the other channel. using --constraints "spaces=public,private" does the job
<SimonKLB> does anyone know if it's possible to run the collect-metrics script in the layer-basic venv?
<SimonKLB> ideally run it in the reactive framework somehow
<kjackal> SimonKLB: you mean it like a feature request?
<SimonKLB> kjackal: more like if it was already possible
<tvansteenburgh> SimonKLB: i thought collect-metrics is a hook
<tvansteenburgh> hmm, maybe that was just unfortunate naming in the docs
<kjackal> SimonKLB: there is a separate layer https://github.com/CanonicalLtd/layer-metrics/blob/master/hooks/collect-metrics and here is an example on how to use it https://github.com/juju-solutions/bigtop/tree/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager
<SimonKLB> kjackal: i know how metrics work normally
<SimonKLB> i was wondering if you could run it in the reactive framework somehow, so that you could access the virtualenv from layer basic easily
<SimonKLB> tvansteenburgh: i might be mistaken, but i think the metrics work a bit differently
<mattyw> collect-metrics is just a hook. so you could use it with just layer basic by doing @hook.Hook("collect-metrics")
<SimonKLB> mattyw: ah that's neat!
<SimonKLB> so now i just need to get charmhelpers somehow :D
<mattyw> ^^ that decorator is from memory so it might not be totally copy/paste accurate
<mattyw> SimonKLB, I think charmhelpers comes with layers right?
<SimonKLB> currently, im accessing charmhelpers from the virtualenv created by layer-basic
<SimonKLB> mattyw: yea
<mattyw> SimonKLB, depending on what you're doing there's nothing to stop you just having an executable bash script called collect-metrics in the hook directory
<mattyw> SimonKLB, the main difference between the collect-metrics hook and other hooks is that errors in the collect-metrics hook won't put the charm in an error state
<SimonKLB> mattyw: so normally when you use the reactive framework it creates the hooks for you so that they all execute the reactive main function
<SimonKLB> im not sure what the best practice is here when using metrics
<SimonKLB> since that has its own hook which then executes commands depending on which metric it is collecting
<mattyw> SimonKLB, ah yes I see, I'm sure there was a recent version that included it, if not there should be
<mattyw> SimonKLB, is there any reason you don't want to use the metrics layer?
<SimonKLB> mattyw: i'd like to access the relations if possible so that i can fetch data from a database-charm when generating the metrics
<SimonKLB> that would be very easy if i could run it in the reactive loop
<rick_h> cnf: morning, how goes things out east?
<kjackal> SimonKLB: for what is worth, if you want to activate the venv of the basic layer you can do so like this:  https://github.com/juju-solutions/layer-cwr/blob/master/actions/cwr-charm-pr#L7
<SimonKLB> kjackal: yea i know, but i have to go through some unnecessary hoops to import layer basic if im going to do it myself instead of letting reactive do it for me :)
<mattyw> SimonKLB, is the information you want stored in charmhelpers.core.unitdata?
<mattyw> SimonKLB, or were you hoping to be able to call relation-get?
<cnf> rick_h: nice weather! :P
<cnf> rick_h: good morning :P
<cnf> just got a fresh red bull
<SimonKLB> mattyw: relation-get hopefully :) i want the ip of the relational charm so that i can access the database-api
<mattyw> SimonKLB, unfortunately relation-get isn't available in the collect-metrics hook - that's the other way they're slightly different to normal hooks
<mattyw> SimonKLB, but in your reactive framework when those values are set from the relation the usual approach is to set them in the unitdata key value store. and then make use of them in the collect-metrics hook
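mattyw's suggested pattern (cache values from relation hooks somewhere the collect-metrics hook can read, since relation-get is unavailable there) can be sketched as plain bash, per the earlier note that collect-metrics can just be an executable script. Everything here is illustrative: the helper names and the `CHARM_STATE` cache path are made up; only `add-metric` is a real hook tool, and it exists only inside a metrics-enabled charm's hook context.

```shell
#!/bin/bash
# Hypothetical helpers: a relation hook caches the value on disk, and the
# collect-metrics hook (where relation-get is unavailable) reads it back.
# CHARM_STATE is a made-up location; a real charm would use its unit data dir
# (or charmhelpers.core.unitdata in a reactive/python charm).
CHARM_STATE="${CHARM_STATE:-$PWD/.metrics-cache}"

save_cached_value() {    # call from e.g. a *-relation-changed hook
    mkdir -p "$CHARM_STATE"
    printf '%s' "$2" > "$CHARM_STATE/$1"
}

read_cached_value() {    # call from the collect-metrics hook
    cat "$CHARM_STATE/$1" 2>/dev/null
}

collect_metrics() {
    units=$(read_cached_value unit-count)
    # add-metric only exists inside the collect-metrics hook context, so guard.
    if command -v add-metric >/dev/null 2>&1; then
        add-metric "units=${units:-0}"
    fi
}
```

Dropping a script like this in `hooks/collect-metrics` sidesteps the reactive loop entirely, which fits mattyw's point that errors in this hook won't put the charm into an error state.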
<SimonKLB> mattyw: the problem is that the data is static, im basically keeping track of how many units are running of a perticualr application
<SimonKLB> mattyw: i could probably use libjuju for this though if it's not possible to get the relation data
<mattyw> SimonKLB, so you want to be able to access the database you're related to?
<mattyw> SimonKLB, but you what you really want to know is the number of units that are running?
<SimonKLB> mattyw: yea but it could probably be solved in other ways also
<SimonKLB> mattyw: well yea, or specifically the machinespecs the units are running on
<mattyw> SimonKLB, and I guess just counting cpus/ram in the collect-metrics hook is not an acceptable solution?
<SimonKLB> mattyw: well the problem is that im not counting the cpu/ram on the machine the collect-metrics hook is running on, i want to get the cpu/ram on other machines that are running a different charm
<SimonKLB> (this is for the charmscaler btw)
<mattyw> ok - I see
<rick_h> cnf: send some of that weather over here. Still below freezing each morning here.
<cnf> ouch :P
<cnf> we are a country of cold and wet, so we like our sunshine :P
<tvansteenburgh> rick_h: is it possible to remove a charm from the charm store (in a personal namespace)?
<cnf> rick_h: so, are you available to help me some today?
<rick_h> tvansteenburgh: yes, the charm delete api has been wired up but it's not yet through the CLI to end users. So it's possible but still requires some manual api foo atm
<magicaltrout> make it so, my namespace is full of $hit I just hide from users! :)
<rick_h> cnf: yes, I wanted to kind of reset and ask if you've got notes/doc on what your setup looks like. How many machines, special things about them, what the spaces layout looks like, the bundle you're working to use.
<tvansteenburgh> rick_h: okay, so it's coming soon to the cli?
<cnf> sure
<rick_h> tvansteenburgh: it's not on the "on fire" list so it's coming but I don't know it's going to be out in the next couple of weeks
<cnf> rick_h: you want to do that here, or in pm, or somewhere else?
<tvansteenburgh> rick_h: ack
<rick_h> cnf: we can do here. Folks can ignore us or if you'd prefer PM feel free.
<cnf> ok, so I have a VM running the MAAS controller
<cnf> and a kvm VM on the maas controller running the juju controller
<cnf> i have 4 physical HP machines of various layouts, at least 16 cores and 24G in the smallest one
<cnf> i set up a copper network for PXE boot and maas networking, because i could not figure out how to PXE boot off a LACP link
<cnf> and all machines have a 10G LACP with vlans for various networking
<cnf> well, all but the smallest, that has just a 1G trunk for now
<rick_h> k
<cnf> i'm using a slightly modified bundle yaml i got from jamespage
<cnf> http://termbin.com/pjqj
<cnf> past few weeks, i have run into a fair number of bugs and problems with both maas and juju
<cnf> right now, my state is http://termbin.com/2yke
<rick_h> cnf: k, so the machines look like they've got two devices? or are there more with the bonds?
<cnf> uhm, they have a ton of devices :P but active it's 1 for PXE / MAAS, and one bond with 2 or 3 vlan tags
<cnf> configured in maas
<rick_h> cnf: k
<cnf> https://www.dropbox.com/s/tupaiwe049g8rzk/Screenshot%202017-03-23%2014.18.32.png?dl=0 for example
<rick_h> cool yea I see machine 0 with 4 addresses in the machine status output
<cnf> so what stands out to me, atm (i'm hammering the nails as i see them...)
<cnf> message: 'can''t get info for image ''juju/xenial/amd64'': not found'
<cnf> rick_h: i should add that only the public network on machine 0 can access the internet directly
<cnf> and i have http-proxy and https-proxy set
<cnf> which needs to go over the maas network
<rick_h> cnf: ok, yea I see all the lxd on machine 0 is in pending and that is lxd reaching out for the container images
<cnf> right, so i think it's not respecting proxy settings?
<rick_h> cnf: yea, I'm looking. So lxd supports these per https://github.com/lxc/lxd/issues/2147
<rick_h> cnf: and https://bugs.launchpad.net/juju/+bug/1594720 says the fix is release ...
<mup> Bug #1594720: lxd containers not using configured proxy for downloading images <2.0> <addressability> <lxd> <network> <proxy> <sts> <juju:Fix Released by thumper> <https://launchpad.net/bugs/1594720>
<cnf> hmm
<rick_h> cnf: so looking at the LP bug there's some commands to run that get the LXC config to see if the proxy is set and we can then check if it's set correctly or not.
<rick_h> cnf: https://bugs.launchpad.net/juju/+bug/1594720/comments/15
<mup> Bug #1594720: lxd containers not using configured proxy for downloading images <2.0> <addressability> <lxd> <network> <proxy> <sts> <juju:Fix Released by thumper> <https://launchpad.net/bugs/1594720>
<cnf> looking for the lxc command to show that
<cnf> rick_h: $ sudo lxc config get core.proxy_http
<cnf> http://172.20.20.1:8000
<rick_h> cnf: and that's the correct value for the proxy?
<cnf> yep
<rick_h> cnf: what about https?
<cnf> and it is reachable from the host
<cnf> set as well
<rick_h> does the proxy also run for https?
<cnf> yep
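The manual checks above can be rolled into one comparison. A hedged sketch: `check_lxd_proxy` is an invented name, while `core.proxy_http` / `core.proxy_https` are the LXD config keys actually being queried in this conversation.

```shell
# Compare the LXD daemon's proxy config against the shell environment.
# check_lxd_proxy is a made-up helper; it assumes lxc is on PATH.
check_lxd_proxy() {
    for key in http https; do
        daemon=$(lxc config get "core.proxy_${key}")
        eval "env_val=\$${key}_proxy"
        if [ "$daemon" != "$env_val" ]; then
            echo "mismatch for ${key}: daemon='${daemon}' env='${env_val}'"
            return 1
        fi
    done
    echo "proxy settings consistent"
}
```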
<rick_h> cnf: ok, so then we should try to find in the lxc/lxd logs to see where the actual urls being hit for the images is and see if that lines up correctly or if it's going somewhere crazy
<cnf> $ curl https://cloud-images.ubuntu.com/releases returns a 301
<cnf> https://bpaste.net/show/a39688f501a9
<rick_h> cnf: do you have the actual error message about the image? I'm not sure on the 404 lines in that log.
<cnf> rick_h: it literally says lvl=info msg="Sending top level 404" t=2017-03-22T18:11:16+0000 url=/1.0/images/juju/xenial/amd64
<rick_h> cnf: that's the lxd images right?
<rick_h> cnf: sorry the lxd logs
 * rick_h gets more coffee in him
<cnf> that's from /var/log/lxd/lxd.log
<cnf> :P
<cnf> i'm on my 2nd red bull :P
<rick_h> cnf: but in the juju end was there a log that stated anything ?
<rick_h> cnf: like in the machine-0 log or something?
<cnf> yes, in http://termbin.com/2yke
<cnf> containers:
<cnf>       2/lxd/0:
<cnf>        machine-status:
<cnf>           current: provisioning error
<cnf>           message: 'can''t get info for image ''juju/xenial/amd64'': not found'
<rick_h> right, thanks. Sorry I ended up searching for "xenial" but on the bundle pastebin
<cnf> :P
<cnf> i don't know where the path /1.0/images/juju/xenial/amd64 should exist
<rick_h> cnf: yea, I'm just looking to see if anyone from lxd is around that can shed more light on that.
<cnf> right
<rick_h> cnf: I've not used lxd in a proxy setup and so I'm not sure if that's normal "Oh, I don't have an image so I'll go get one" or something failed and that's "404 I don't have and could not get that image"
<cnf> uhu
<cnf> i have not used lxd at all
<cnf> well, not directly
<jam> rick_h: cnf: IIRC ivoks had set up something with offline support for cloud-images and found the images were in the wrong directory
<cnf> hmm
<jam> (he had done something like an rsync of the cloud-images directory, but turns out on the host machine they are in the wrong path and an apache rule puts them in the right one, IIRC)
<cnf> uhm
<cnf> you lost me, there
<cnf> apache rule?
<cnf> apache where?
<jam> cnf: so, the URL we care about is likely to be http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson
<jam> cnf: on that machine you should be able to run
<jam> lxc image list ubunt:
<jam> sorry
<jam> lxc image list ubuntu:
<jam> which reads the above URL and http://cloud-images.ubuntu.com/releases/streams/v1/downloads.sjson
<jam> to return a big list of officially released images
<cnf> that's empty
<cnf> https://bpaste.net/show/be8d401c7f1c
<jam> sorry http://cloud-images.ubuntu.com/releases/streams/v1/com.ubuntu.cloud:released:download.json
<jam> cnf: I think you need a :
<jam> "lxc image list ubunt:"
<jam> ubuntu:
<jam> wow, that last u doesn't like me
<cnf> error: Get https://cloud-images.ubuntu.com/releases/streams/v1/index.json: Unable to connect to: cloud-images.ubuntu.com:443
<jam> otherwise you're searching for a local image named 'ubuntu' vs listing the images in the 'ubuntu:' remote
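The rule jam is describing (a trailing colon selects a remote such as `ubuntu:`, backed by the cloud-images.ubuntu.com simplestreams index; without it, `lxc image list` just filters local images by alias) can be captured in a tiny helper. The helper itself is hypothetical, merely a mirror of the client's argument handling:

```shell
# image_list_target is a toy helper mirroring the lxc client's rule:
# a trailing ':' names a remote; otherwise the argument filters local images.
image_list_target() {
    case "$1" in
        *:) echo "remote ${1%:}" ;;    # e.g. "ubuntu:" -> cloud-images remote
        *)  echo "local filter $1" ;;  # e.g. "ubuntu"  -> local alias filter
    esac
}
```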
<jam> cnf: if you have http-proxy set, do you have https-proxy set?
<jam> cause LXC only uses https-proxy it doesn't fall back to http-proxy for 443 traffic
<cnf> jam: yes
<cnf> curl https://cloud-images.ubuntu.com works fine
<jam> and:  sudo lxc config get core.proxy_https
<cnf> $ sudo lxc config get core.proxy_https
<cnf> http://172.20.20.1:8000
<jam> cnf: if "lxc image list ubuntu:" isn't listing any images
<jam> that's why Juju can't find them
<cnf> jam: right... but why? i can curl it just fine with those proxy settings?
<jam> cnf: echo $https_proxy and echo $http_proxy ?
<cnf> show the same values
<cnf> $ echo $https_proxy
<cnf> http://172.20.20.1:8000
<cnf> it's all set by juju
<jam> cnf: well, that doesn't mean we can't get something wrong. The only other quick thing I can think of is to try bouncing the lxd agent
<jam> sudo service restart lxd
<jam> or is it "sudo service lxd restart"
<jam> I usually get that wrong the first time
<jam> anyway, have to go pick up my wife
<cnf> i went for sudo service lxd restart :P
<jam> you're at a point where I'd have to drop into #lxd and ask stgraber
<cnf> hmz
<rick_h> jam: cnf yea, where I was going to head next for the lxd specifics on that.
<cnf> jam: #lxd is pretty much empty
<cnf> is it #lxc?
<cnf> it's #lxcontainers
<rick_h> #lxcontainers
<rick_h> yea
<cnf> i wonder what timezone they are on...
<rick_h> a mix, I know the lead is in my timezone ish.
<cnf> right
<cnf> i need a vacation :/
<cnf> rick_h: why do i always run into the weird bugs? :P
<rick_h> cnf: heh, because you're doing the fun complicated real world install
<cnf> yeah :P
<cnf> the "fun" part is wearing off though
<cnf> 3 weeks, and still nothing running is getting frustrating
<rick_h> cnf: it's hard to setup a lab for this to put into automated testing, but it's this stuff that has to work to go out there into production. Sorry you're hitting it. Looking at bugs/etc though this should work ok
<cnf> uhu
<magicaltrout> reminds me of when SQL had about 100 different dialects that automatic SQL generators had to support. All seemed fine until you tried something tricky! ;)
<magicaltrout> lots of moving parts in LXD, Juju and MAAS
<cnf> uhu
<cnf> when i'm working from home
<cnf> i have zelda to comfort me in the waiting periods :P
<cnf> here, no such luck
<magicaltrout> sounds a bit lewd! ;)
<cory_fu> jamespage: Any objection to me re-promulgating ganglia and ganglia-node under a focused ganglia-team (that I'm going to create) which is owned by openstack-charmers?
<jamespage> cory_fu: works for me
<jamespage> cory_fu: might have one already one sec
<cory_fu> Cool.  Also, of note, I'm looking at merging the ganglia-node review
<jamespage> cory_fu: https://code.launchpad.net/~ganglia-charmers
<cory_fu> jamespage: Ok, cool.  I'll get that sorted out, then
<jam> cnf: are you able to curl the full url http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson
<jam> I'm wondering if your proxy is doing caching, and maybe has cached something poorly
<cnf> jam: yes
<cnf> jam: curl works just fine
<rick_h> jam: so looking at #lxcontainers it works from lxd, but something in how juju is setting things up causes the proxy config to not work.
<jam> rick_h: you stopped the load test, right? I tried to see the progress but just got a 404
<rick_h> jam: yes, had a machine issue here that I needed to kill it for unfortunately
<rick_h> I should setup to be able to start/run it from the maas machine vs my desktop
<jam> rick_h: np, I feel we showed what we wanted to
<cory_fu> jamespage: Did you intend for the gate-xenial in ganglia and ganglia-node to not be executable and thus be skipped?
<rick_h> jam: rgr, got over 400 models so yay
<jamespage> cory_fu: yes - we need to push up the current proposed ones first
<jamespage> cory_fu: then we can enable the xenial gate tests
<cnf> hmm
<cory_fu> jamespage: Ok, so only push them to trusty?
<cnf> rick_h: any more ideas? :P
<jamespage> cory_fu: series in metadata - its a circular depends issues
<cory_fu> jamespage: Ah, gotcha
<jamespage> ganglia <-> ganglia-node
<jamespage> cory_fu: I'll push another rev to edge once we have this set done
<rick_h> cnf: no :( we're beyond my scope. I know jam is EOD but he's the next best person that might be able to help or get someone to help. Honestly, the answer is probably going to be to get as much of this material as possible with the lxd "it works from here" added to the notes into a bug for the engineering team in juju to get at.
<cory_fu> jamespage: Ok
<cnf> hmm
<rick_h> cnf: I'm sorry, I was hoping we'd be able to walk through a hole in the setup, but it's setup right. From your end everything *should work*
<cnf> it should, indeed
<cnf> rick_h: i appreciate your time none the less
<rick_h> proxy is set, lxd does work, but somehow juju isn't doing the right thing
<cnf> it at least validates i didn't do something stupid
<cnf> (it has been known to happen)
<rick_h> cnf: completely, I appreciate you working through that with the lxcontainers folks to make sure the bug isn't there. That's really helpful and one less layer to try to go through
<cnf> ok, lets hope jam becomes available soon
<rick_h> cnf: so he's UTC+4 so that's why I think for today the best thing is to get the bug filed with all the notes from today
<rick_h> cnf: and then we can ask nicely for him to find someone to peek at it in the morning.
<cnf> hmz
<cnf> these timezones are killing me :(
<rick_h> cnf: yea, I guess we've gotten used to it over time so it seems natural but it's definitely not helpful in this situation.
<cnf> rick_h: what is grating on me, is that i have been doing this for several bugs for weeks now
<cnf> and getting them fixed just takes so long
<cnf> get 1 baby step, then wait for the next day
<cnf> rick_h: and i don't mean to invalidate your effort
<cnf> just gets frustrating on my end :P
<rick_h> cnf: I totally understand. /me will look away while you go kick something
<cnf> :P
<magicaltrout> on the flip side cnf if you ever want to job swap, you'll be straight into to a Juju Consultancy role! ;)
<cnf> ha!
<cnf> magicaltrout: i actually am a freelance IT consultant
<kwmonroe> tvansteenburgh: amulet issue when tearing down the class:  http://paste.ubuntu.com/24235292/
<magicaltrout> if you're paid by the hour, surely broken Juju is actually a bonus? ;)
<cnf> magicaltrout: it's a long term project, so i get paid either way
<cnf> magicaltrout: but i would like something to WORK eventually
<magicaltrout> that is a valid requirement I guess
<magicaltrout> maybe pitch it as a long term goal ;)
<cnf> for my sanity, if nothing else
<magicaltrout> aye i know the feeling
<cory_fu> jamespage: What repo are the new ganglia and ganglia-node revisions coming from?  The "Submit a Bug" links point back to lp:charms/ganglia[-node] but those don't have any commits newer than 2014.
<magicaltrout> try talking to kwmonroe about big data
<magicaltrout> it'll soon grind you down
<cnf> magicaltrout: i don't like waking up and thinking "ugh, not more of this, again"
<jam> cnf: rick_h: so it seems that things like "lxc list" are driven by the client, not the lxd agent, which means 'jujud' has to find cloud-images itself. I seem to remember menn0 fixing a bug recently about Juju and http_proxy not getting set early enough in the lifetime of jujud
<kwmonroe> magicaltrout: shouldn't you be spinning up drill bits?
<jam> cnf: the other thing that you could try is setting things like http_proxy= in /etc/environment and then restarting agents
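jam's workaround can be scripted. A hedged sketch: the function is hypothetical and takes the target file as a parameter, so it can be tried on a scratch file before touching the real /etc/environment; the restart commands in the comment are the ones discussed in the conversation.

```shell
# Hypothetical helper: append proxy vars to an environment file only if absent.
add_proxy_to_env_file() {
    file="$1"; proxy="$2"
    for var in http_proxy https_proxy; do
        grep -q "^${var}=" "$file" || echo "${var}=${proxy}" >> "$file"
    done
}

# On the affected machine (as root), roughly:
#   add_proxy_to_env_file /etc/environment http://172.20.20.1:8000
#   sudo service lxd restart
#   sudo service jujud-machine-2 restart
```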
<magicaltrout> sorry pink face
<jamespage> cory_fu: if you're ok with the charms I can setup all of the metadata - but basically the bug reporting is on launchpad
<jamespage> and the git repos are on github.com/ganglia-charms
<jamespage> cory_fu: https://bugs.launchpad.net/charm-ganglia and https://bugs.launchpad.net/charm-ganglia-node
<jamespage> cory_fu: and then I'll go delete the old branches in bzr
<cory_fu> jamespage: Ok.  I'm getting the "no module named yaml" from ganglia, so it needs the tests.yaml and requirements.txt changes that you did in ganglia-node
<cory_fu> But otherwise they seem good
<jamespage> cory_fu: odd I thought I did that
<jamespage> coreycb: lemme check again
<cory_fu> jamespage: I'm over here. ;)
<cnf> https://bugs.launchpad.net/juju/+bug/1675440 for those that want to track it
<mup> Bug #1675440: juju unable to get LXD images <juju:New> <https://launchpad.net/bugs/1675440>
<jamespage> cory_fu: doh
<jamespage> I wish irccloud would follow a conversation with nicks
<jamespage> cory_fu: -6 imported into reviews.j.c
<cory_fu> jamespage: Hrm.  Yeah, the review does seem to have those bits.  Wait, I think it's my fault
<jamespage> cory_fu: no it was missing the requirements.txt
<cnf> jam, rick_h how can I tell juju to retry an lxd?
<cnf> resolved is for units, and retry-provisioning doesn't work on containers
<cory_fu> jamespage: Ah, ok
<cory_fu> Thanks
<cnf> jam: i edited /etc/environment, and restarted lxd and jujud-machine-2
<jam> cnf: there is "juju retry-provisioning" but I don't think it supports containers
<cnf> jam: it doesn't
<cnf> hmm
<cnf> rick_h: any suggestions?
<cnf> beside a reboot?
<cnf> hp servers take forever to reboot :P
<rick_h> cnf: no, just to manually do the commands vs using the bundle
<cnf> rick_h: ok, what commands?
<rick_h> cnf: e.g. deploy or add-unit with a placement directive into a container e.g. "juju add-unit XXXX --to 0:lxd"
<rick_h> cnf: or just test juju deploy ubuntu --to 0:lxd
<cnf> rick_h: so i have to remove them, and add them again?
<cnf> ERROR cannot add application "ubuntu": placement scope: invalid model UUID "0"
<jam> cnf: lxd:0
<jam> '0' is the machine ID
<jam> in your case it would be
<jam> you probably want "juju add-unit ubuntu --to lxd:2"
<jam> given your earlier messages about being on machine 2
<jam> ah sorry, your bug says machine 1
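The correction comes down to ordering: the container type goes before the machine id, so `lxd:2` means "a new lxd container on machine 2", while `0:lxd` gets parsed as a model UUID. A toy validator for just those two shapes (illustrative only, not juju's full placement grammar):

```shell
# juju add-unit ubuntu --to lxd:2    # new lxd container on existing machine 2
# juju deploy ubuntu --to 0:lxd     # inverted form: rejected as model UUID "0"
# valid_placement is illustrative only, covering just the two shapes above.
valid_placement() {
    echo "$1" | grep -Eq '^(lxd:[0-9]+|[0-9]+)$'
}
```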
<cnf> allocating
<cnf> oh, bah
<cnf>  message: 'unable to setup network: no obvious space for container "2/lxd/1",
<tvansteenburgh> kwmonroe: the amulet test failure is repeatable?
<tvansteenburgh> kwmonroe: i don't know why you'd get that behavior unless your ~/.local/share/juju/store-usso-token disappeared
<kwmonroe> tvansteenburgh: the same charm (spark-21) failed in all my clouds: http://bigtop.charm.qa/cwr_bundle_spark_processing/4/report.html.  i'll pull it into a simpler test case with the ubuntu charm to make sure.
<kwmonroe> tvansteenburgh: my question is why does "juju remove-application" need to talk to the store at all?
<cnf> rick_h: i think the new one says " container started"
<cnf> but status still shows pending
<cnf> not sure is this is expected
<cnf> it has an instance-id
<rick_h> cnf: hmm, so what does juju show-machine show for that?
<rick_h> cnf: sorry, I'm in a meeting so a little distracted atm
<cnf> rick_h: machine-status: running, juju-status: pending
<cnf> so it got farther than the other ones
<rick_h> cnf: hmm, ok
<tvansteenburgh> kwmonroe: it has to auth to the controller
<cnf> are we sure juju/xenial/amd64 is a valid image?
<cory_fu> jamespage: ganglia and ganglia-node are GTG.  So, you're going to push them to ~ganglia-charmers and re-promulgate?
<jamespage> cory_fu: I can do that
<kwmonroe> oooohhh.. yeah tvansteenburgh.  that makes sense.  let me double check my login status on the machine where it's failing.  this is probably just pilot error.
<cory_fu> jamespage: Ok, let me know if you need anything from me, then.
<cory_fu> jamespage: I'm happy to help with any parts of the process
<jamespage> cory_fu: no worries - thank you for the reviews :-)
<cnf> rick_h: maybe it's a broken charm, or something
<cnf> though that would be weird, jamespage would have run into this, then
<hatch> Budgie^Smore `juju add-model --config vpc-id=<id>`
<cory_fu> jamespage: Should I go ahead and close said reviews now, or wait for confirmation that they're re-promulgated?
<cnf> hmz
<jamespage> cory_fu: please go ahead - I'll wrap up my meeting and move things around as discussed
<cnf> uhm
<cnf> wth
<Budgie^Smore> hatch I will give the add-model --config vpc-id a whirl, I broke the model again last night so badly at the controller level :)
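hatch's one-liner, spelled out. A hedged sketch: the model name and vpc id are placeholders, and the function only assembles the command string so the flag placement is visible without a live controller; `vpc-id` is the AWS-provider model config key mentioned above.

```shell
# Assemble (not run) the add-model command so the flag placement is visible.
# Model name and vpc id are placeholders, not real resources.
make_add_model_cmd() {
    echo "juju add-model $1 --config vpc-id=$2"
}
# e.g. make_add_model_cmd k8s-private vpc-0abc1234
```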
<hatch> oops! :)
<rick_h> cnf: what's it doing now? It was pending but no reason given?
<cnf> juju-status is pending on the ubuntu one
<rick_h> cnf: this is from just "juju deploy ubuntu"
<cnf> machine-status is deployed
<rick_h> cnf: right, and if you show-machine 0 there's no more verbose output there?
<cnf> no
<rick_h> cnf: can you ssh to the new machine then? Does it have an IP addr and such?
<cnf> juju-status:
<cnf>           current: pending
<cnf>           since: 23 Mar 2017 16:19:26+01:00
<cnf>         instance-id: juju-857117-2-lxd-2
<cnf> rick_h: uhm, not directly
<cnf> it didn't get an ip in the maas/juju network
<ThiagoCMC> Hey guys, I'm very familiar with OpenStack on Ubuntu (via apt-get and manual setup), however, from now on, I want to start using Juju but I've never used it before...
<ThiagoCMC> Is there any way to install OpenStack via Juju, without MaaS, just everything in one box?
<Budgie^Smore> it probably wasn't truly broken but my Kubernetes master wouldn't start waiting for pods to start and I couldn't be too bothered to troubleshoot that on a clean model
<ThiagoCMC> I'm seeing that conjure-up can help with that but, what about going straight with Juju instead?
<rick_h> cnf: is there anything useful in the debug-log or the machine-0 log on the host machine?
<cnf> but i can ssh to it
<admcleod> ThiagoCMC: you could try https://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html
<jamespage> cory_fu: https://jujucharms.com/ganglia/ and https://jujucharms.com/ganglia-node all done
<ThiagoCMC> admcleod, thank you!
<jamespage> ThiagoCMC: alternatively try with conjure-up
<jamespage> ThiagoCMC: http://conjure-up.io/
<ThiagoCMC> jamespage, you think that Conjure-up Next PPA is a better starting point?
<cnf> nothing interesting in the machine-log
<jamespage> ThiagoCMC: use the snap as detailed in that link - I think thats the right route these days stokachu ?
<ThiagoCMC> Oh, I see...
<cnf> rick_h: /var/log/juju is empty inside the container
<rick_h> cnf: right, but on the host machine it should have any notes.
<cnf> nothing of interest
<ThiagoCMC> Does Juju OpenStack supports: OpenStack Ocata with Senlin, AODH and new Ceilometer with Gnocchi?
<cnf> rick_h: i guess 2017-03-23 15:19:39 INFO juju.tools.lxdclient client_image.go:170 found image from https://cloud-images.ubuntu.com/releases for juju/xenial/amd64 = 2cab90c0c342346ea154bc2e8cacdae752a70747a755ce1f2970c9a9ebb5fe8c
<cnf> so it should bloody well work
<magicaltrout> ThiagoCMC: you need to prepend your questions with jamespage :
<magicaltrout> :)
<ThiagoCMC> ^_^
<Budgie^Smore> oh I remember why I destroyed the controller / model hatch, I hit a bug where juju wouldn't ssh proxy to private systems either by setting proxy-ssh config option or using --proxy flag but there was already a bug filed and triaged for it
<jamespage> ETOMANYCHANNELS
 * magicaltrout leaves norwich before he's hunted down
<jamespage> ThiagoCMC: Ocata: yes, Aodh: yes, Senlin: no, Gnocchi: no
<hatch> Budgie^Smore ahhh ok thanks for looking at the bugs though - appreciate any new issues too :)
<ThiagoCMC>  jamespage, Hmm... That's sad... But okay... Thanks!
<Budgie^Smore> hatch I didn't deliberately hunt for bugs but they crop up when I am searching for what I am doing wrong ;-)
<hatch> haha, hey, whatever works :)
<jamespage> ThiagoCMC: I'm hoping to put Gnocchi on the roadmap for the openstack charms but it won't be in the short term
<jamespage> ThiagoCMC: if someone would like to work on a senlin charm I'm all ears :-) but don't have time with the existing pool of resources working across the charm set (33 of them)
<cnf> rick_h: so uhm
<cnf> i removed those units
<cnf> and added them agin
<jamespage> magicaltrout: we should do a beer soon
<cnf> now it seems the containers came up
<cnf> i didn't actually _change_ anything!
<magicaltrout> indeed now the grayness has left
<rick_h> cnf: ? so what changed? Did jam have you do something?
<cnf> no!
<cnf> nothing was changed
<cnf> >,<
<ThiagoCMC> jamespage, that's okay! I have those things working now but, I'm deploying everything using my own Ansible playbooks, works great! I have to start working with Juju to help you guys!   :-D
<cnf> shit still isn't working, but the containers at least came up
<cnf> wth
<cnf> rick_h: so all my containers are running now
<cnf> but none of them have juju come up
<cnf> wth is going on :(
<cnf> k, i'm going to destroy my model, and deploy it again
<cnf> and go for a 20 minute stroll
<admcleod> cnf: i wonder if it has something to do with one dns resolution attempt working, and one failing - maybe you have a broken dns server somewhere in a pool?
<cnf> admcleod: it seems to work for everything else
<admcleod> cnf: perhaps everything else handles such a failure gracefully? just guessing. ive just tried to reproduce with local lxd provider and proxy and i cant
<Budgie^Smore> so my first weird question of the day, how is jaas going to handle machines that are behind a nat router? (btw hatch that problem with my model not showing up in the UI has something to do with the initial load (after login) of the page)
<rick_h> Budgie^Smore: so that's why it only works in public cloud atm
<Budgie^Smore> rick_h but even in a public cloud I can have instances behind a nat router :P
<Budgie^Smore> (tbh it is my understanding of how a VPC in AWS actually works)
<rick_h> Budgie^Smore: right, but the agents do most of the work to call out to the controller and the two exceptions I can think is juju ssh and juju actions so will have to see how they behave behind a vpc
<Budgie^Smore> rick_h I am "testing" right now, so we will see, as it stands I don't think ssh (and by extension exec) will work. probably need a proxy system in the environment and juju having knowledge of it
<Budgie^Smore> rick_h I am thinking something like a poor man's VPN connection using sshuttle
<rick_h> Budgie^Smore: hmm, yea, will have to add this to the "test and figure out wtf" list
<bdx> Budgie^Smore: yo
<Budgie^Smore> give me an hour or so and I will have an initial test done
<Budgie^Smore> bdx: well howdy
<bdx> Budgie^Smore: hey, just reading some of the backlog here
<bdx> Budgie^Smore: I have plenty of envs that are running in private address spaces (behind a nat) using JAAS .... I've yet to find anything that doesn't work as I would expect ... minus ssh
<bdx> Budgie^Smore: sshtunnel and (depending on if you have a publicly available endpoint) openvpn are great remedies for access in that use case
<Budgie^Smore> bdx that is what I expect too, ssh is always problematic in those envs... I have used sshuttle in the past too, as long as I have 1 ssh accessible system in the env that is
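[A minimal sketch of the "poor man's VPN" sshuttle approach discussed above. The bastion host and VPC CIDR below are invented examples; sshuttle itself only needs one ssh-reachable host inside the environment.]

```shell
# Assumption: one publicly reachable host and the default AWS VPC range.
BASTION="ubuntu@bastion.example.com"   # example bastion, not from the log
VPC_CIDR="172.31.0.0/16"               # example VPC range

# Print the tunnel command rather than running it here; sshuttle routes
# all traffic for VPC_CIDR through the ssh connection to BASTION.
echo "sshuttle -r $BASTION $VPC_CIDR"
# prints: sshuttle -r ubuntu@bastion.example.com 172.31.0.0/16
```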
<bdx> Budgie^Smore: are you using AWS provider?
<Budgie^Smore> bdx in this case, yes
<bdx> Budgie^Smore: have you configured nat, and igw routing tables, and configured your subnets to use the routing tables that point to the respective nat or igw?
<Budgie^Smore> bdx: yup 2 x public subnets, 2 x private subnets, 1 x igw and 1 x nat within a vpc. actually found a cloundformation yaml for it too
<bdx> Budgie^Smore: niceee, sounds like you are on the right track ... did you pipe that through to juju then?
<Budgie^Smore> bdx yeah I am actually deploying charms right now too it (using the CDK as a guide but customizing to take into account the environment)
<bdx> Budgie^Smore: nice
<cnf> rick_h: so the containers started now
<bdx> Budgie^Smore: I have most of my apps configured with both subnets as juju spaces, such that only internet-facing proxies land on the igw subnets, and everything else sits behind the nat gateway subnets (the way it should be)
<bdx> http://paste.ubuntu.com/24235685/
<cnf> rick_h: model isn't working, but for some reason the containers started
<Budgie^Smore> bdx pretty much doing the same :)
<cnf> rick_h, jamespage http://termbin.com/ae81
<cnf> suggestions?
<jam> rick_h: I'm really past EOD. my gut says we're failing to give an error when requesting a container that doesn't have a defined space on a machine with multiple network interfaces
<Budgie^Smore> ok "1 unit, xenial, 8x26.88GHz, 32.00GB, 8.00GB" seems wrong... I don't know of any CPU that has a 26GHz clock!!!
<jam> and then starting the container but not requesting an IP because we don't know where to put it
<jam> cnf: do you have bindings specified for your applications?
<jam> I'm out again
<ybaumy> the export button does not work for me in juju gui
<bdx> Budgie^Smore: sweet! yeah, so I just stash my openvpn on a igw subnet in my 'common-infrastructure' model in the same vpc - this is how I'm accessing my instances on the nat subnets
<cnf> jam: yes, i used jamespage bundle
<cnf> ok :(
<Budgie^Smore> bdx yeah I will do that if I have to, I will be honest and say I prefer site-to-site when it comes to VPN
<cnf> is http://termbin.com/023aq not right?
<bdx> Budgie^Smore: not sure what your mileage is with pfsense .... you can create a custom ami
<cnf> jamespage: can i get you to look at this?
<cnf> rick_h: you still here?
<Budgie^Smore> bdx for now I am just going to use one of those proxies and use sshuttle to tunnel the IP range of the VPC
<rick_h> jam sorry thought you had gone for EOD.
<bdx> Budgie^Smore: can you link me to your cloudformation yaml for that
<rick_h> cnf: sorry, in and out with the schedule ATM.
<cnf> ok :/
<Budgie^Smore> bdx https://github.com/madeden/blogposts/blob/master/k8s-existing-env/src/cloudformation/NetworkLayout.json
<Budgie^Smore> actually it is JSON but same diff
<bdx> Budgie^Smore: sweet, thanks!
<cnf> i'm close to throwing in the towel, btw jamespage, rick_h
<cnf> hmz, how do i look at an application?
<cnf> or a unit
<Budgie^Smore> ok so almost complete with this model's initial deploy :)
<admcleod> cnf: what do you mean, 'look at'?
<cnf> admcleod: inspect? show what it wants?
<cnf> status says  Incomplete relation: monitor
<cnf> what does it expect? is it not defined? is it not working?
<admcleod> cnf: well you can 'juju ssh unit/0' - oh, well in that case its waiting for one of the things its related to (defined in that bundle) to provide 'monitor'
<cnf> admcleod: everything is waiting on everything atm it seems
<cnf> i just don't know why
<cnf> or what the root problem is
<admcleod> cnf: ceph-ods is waiting for ceph-mon
<admcleod> cnf: it looks like all the containers are still 'pending'?
<cnf> yes, great, and why isn't ceph-mon coming up?
<admcleod> s/ods/osd
<cnf> yes, and they'll stay pending forever
<cnf> but i don't know why
<jamespage> cnf: sorry been otp
 * jamespage reads backscroll
<jamespage> cnf: ok so looks like you have
<jamespage> a number of LXD machines that are marked as running but don't yet have a juju status
<jamespage> https://www.irccloud.com/pastebin/gDuay0BK/
<cnf> yes, all of them
<jamespage> cnf: which is the cause of
<jamespage> https://www.irccloud.com/pastebin/oagEPOlJ/
<cnf> yeah
<cnf> jamespage: so i think? the containers expect to be in the maas / juju network?
<jamespage> jam, rick_h: this looks like issues between the jujud machine agents on the LXD containers and the controller machine
<cnf> maybe?
<jamespage> cnf: yeah pondering that
<cnf> but if so, why the fuck isn't juju doing this?
<cnf> and please do pardon the language
<cnf> i'm quite frustrated at this point
 * jamespage puts his hands on his ears
<jamespage> cnf: I don't know but lets see if we can confirm this
<jamespage> cnf: please can you "juju ssh 0"
<jamespage> cnf:  and then sudo to root
<jamespage> cnf: do a 'lxc list'
<cnf> (as a side note, and something i'll address later, juju is assigning about 10 public addresses for this o,O)
<jamespage> and then do a 'lxc exec <one of the container> bash'
<jamespage> then we can peek into /var/log/juju
<jamespage> or even /var/log/cloud-init.log
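[jamespage's debugging walk-through above, collected into one helper as a sketch. Machine and container names are examples; it runs against the *host* machine (which the controller can reach), then pokes into the stuck container from there.]

```shell
# Assumption: `juju ssh <machine>` works for the LXD host itself.
# debug_lxd_container 0 juju-machine-0-lxd-9  (names are examples)
debug_lxd_container() {
  local machine=$1 container=$2
  # Hop to the host, then into the container: an empty /var/log/juju
  # means the agent never started, so look earlier, at cloud-init.
  juju ssh "$machine" \
    "sudo lxc exec $container -- sh -c 'ls /var/log/juju; tail -n 50 /var/log/cloud-init-output.log'"
}
```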
<cnf> jamespage: /var/log/juju is empty
<cnf> in the container
<jamespage> cnf: ok so the issue is earlier in lifecycle
<jamespage> cnf: how about /var/log/cloud-init*.log
<cnf> bunch of tracebacks
<cnf> tries to install stuff, but can't
<jamespage> cnf: can you pastebin them please
<cnf> makes sense, it has NO connectivity
<jamespage> cnf: as in there are no network interfaces for the container
<cnf> there is 1, but that can reach neither internet nor proxy
<jamespage> cnf: or the juju controller I suspect
<jamespage> cnf: that's my hunch at least
<cnf> so i need to manually tell juju to add the maas/juju interface to each container?
<jamespage> cnf: you should not have to do that
<cnf> right
<jamespage> cnf: 'ts why I pinged rick_h and jam
<jamespage> cnf: which network space is the controller listening on?
<cnf> space-maas
<cnf> pasted screenshot of my maas networking in pm
<jamespage> cnf: gotcha
<Budgie^Smore> OK lazyPower I now I have my "new" k8s cluster up :)
<cnf> everything in juju and maas has pushed me, quite aggressively, to put juju/maas mgmt on a separate fabric
<lazyPower> Budgie^Smore: :tada:
<jamespage> cnf: I'll ask you to try something using the juju cli, and then we'll see if we can work that into the bundle
<Budgie^Smore> lazyPower right!
<jamespage> cnf: juju add-machine --to lxd:0 --constraints="spaces=space-maas"
<Budgie^Smore> lazyPower did we find out what happened to juju-crashdump?
<cnf> flag provided but not defined: --to
<jamespage> huh
<jamespage> lemme check that
<jamespage> juju add-machine lxd:0 --constraints="spaces=space-maas"
<jamespage> cnf:  ^^
<cnf> created container 0/lxd/9
<jamespage> cnf: ok but the important bit is that the juju status for the machine goes active
<jamespage> not pending
<jamespage> cnf: if this works, we'll need to add a "constraints: spaces=space-maas" to each of the services that needs to run in a LXD
<jamespage> cnf: 100% agree that juju should just dtrt here and its not afaict
<rick_h> jamespage: cnf so the containers need some way to phone home to the controller. If it can't reach it via some network then yes you'll need the management network available to the containers.
<rick_h> jamespage: cnf we'd have to check with jam on what the containers do if they don't have a direct device on the juju controller network but it's supposed to be routable in some way to the controller through the network the container is on
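[jamespage's two-step workaround above, sketched. The space name `space-maas` comes from cnf's environment; substitute your own management space. Note from later in this log: subordinate applications (e.g. cinder-ceph) must *not* carry constraints.]

```shell
# 1. One-off check that a constrained container comes up "active",
#    not "pending" (requires a live controller, so left as a comment):
#      juju add-machine lxd:0 --constraints="spaces=space-maas"

# 2. If it does, carry the constraint into the bundle for every service
#    that lands in a LXD container. Written locally here just to show
#    the shape of the snippet:
cat > space-constraints.yaml <<'EOF'
applications:
  ceph-mon:
    constraints: spaces=space-maas
EOF
grep -c 'spaces=space-maas' space-constraints.yaml
# prints: 1
```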
<kwmonroe> tvansteenburgh: it was totally a missing sso-token on my part.  sorry for the noise, tearDownClass is doing just fine :)
<ThiagoCMC> hey guys, how do I use "conjure-up openstack" to deploy Ocata? I'm seeing that Newton is the default...
<jamespage> ThiagoCMC: ah its possible that the bundles for conjure-up have not been updated just yet - it lags the main charm/openstack release cycle a little
<jamespage> stokachu: ^^ care to comment?
<ThiagoCMC> Oh, okay...
<cnf> rick_h: i can understand that
<kwmonroe> cory_fu:  your layer-basic bits didn't break lint.  my cwr unit was dirty.  sorry for cursing at you so much this morning.  man, i can't seem to blame people as good as i used to.
<cnf> rick_h: but i'd expect juju to know this :P
<jam> jamespage: cnf: Juju doesn't know what your route tables look like. if you ask it to deploy a container that is only on 172.20.20.0/24 then that is what it does
<jam> cnf: if you want it somewhere else, ask it to put it there
<rick_h> cnf: yes, that gets back to jam's comment earlier about that.
<jam> juju deploy app --bind foo --constraints spaces=other
<jam> cnf: it is entirely plausible that you have a route from 172.20.30.0/24 to 172.20.20.1 or wherever you are hosting Juju Controller
<rick_h> <jam> John Arbash Meinel rick_h: I'm really past EOD. my gut says we're failing to give an error when requesting a container that doesn't have a defined space on a machine with multiple network interfaces
<jam> though I would expect the containers to get *an* IP, if they aren't getting anything at all, that is surprising
<cnf> they get an ip
<cnf> just not one that works to get to the juju controller
<jam> cnf: so if one of the subnets is one that containers *have* to be on (to route to internet, etc)
<jam> then add it
<jam> juju deploy foo --bind bar
<jam> tells us to put a container with access *only* to bar
<stokachu> ThiagoCMC: i think the description says newton and that needs to be updated
<cnf> hmm, so i have to manually bind _every_ container to the juju space
<cnf> o,O
<stokachu> ThiagoCMC: it should be whatever is in the latest charm store
<jam> cnf: yes
<ThiagoCMC> stokachu, that's nice!
<ThiagoCMC> I'm about to try it now
<jam> cnf: cause if you're doing something like PCI it may be that you have a firewall between everything and the juju controllers
<cnf> hmm
<jam> cnf: and don't want your Apache software
<stokachu> ThiagoCMC: ill update the descriptions now but they wont show up until i do another conjure-up release
<jam> running on the same subnet as your Juju controller
<jam> as your Postgres DB
<hatch> Budgie^Smore did the vpc config option work for you?
<Budgie^Smore> hatch yes it did :)
<ThiagoCMC> stokachu, BTW, if the package "software-properties-common" is not installed, "sudo snap install conjure-up --classic" fails!
<Budgie^Smore> hatch cluster is setup, just solving how I am going to ssh to the instances right now
<cnf> jam: i don't want apache on the same subnet as juju, obviously :P
<stokachu> ThiagoCMC: really?
<ThiagoCMC> yep
<stokachu> ThiagoCMC: AH!
<jam> cnf: then create a router that can route between them
<hatch> Budgie^Smore excellent, I've triaged adding that feature into the GUI release after the upcoming one.
<stokachu> ThiagoCMC: nice find!!
<cnf> uhm
<stokachu> ill fix that now too
<cnf> huh?
<cnf> jam: you lost me
<Budgie^Smore> hatch awesome :)
<ThiagoCMC> =)
<jam> cnf: the issue is that you ask us to only put the container on 172.x.y and we did, but you don't have a route in your network from 172.x.y to the Controller
<Budgie^Smore> for some reason sshuttle isn't behaving today :-/
<stokachu> ThiagoCMC: it's for the lxd ppa
<stokachu> ThiagoCMC: totally missed that
<ThiagoCMC> yep
<cnf> hmm
<ThiagoCMC> I always start things with Ubuntu ISO and bare-minimum setup, so I can catch those things.. ha!   =P
<cnf> i'm not liking these containers much
<jam> cnf: if you don't have routes, then you can ask us to also put the containers in other subnets additionally
<jam> via the "--constraints spaces=" request
<jam> cnf: and in bundles that looks like a "constraints" section on either the applications generally or the machines specifically
<stokachu> ThiagoCMC: glad you did, thanks again
<ThiagoCMC> stokachu, another problem, after "conjure-up 2.1.2 from 'canonical' installed", "conjure-up openstack" returns: "-bash: conjure-up: command not found"
<ThiagoCMC> :-(
<cnf> well, then i have to add the spaces=space-maas on _everything_
<stokachu> ThiagoCMC: yea that one is hard to fix
<ThiagoCMC> =/
<stokachu> ThiagoCMC: you'll just need to exec $SHELL
<stokachu> or logout of the terminal and back in
<ThiagoCMC> ok
<ThiagoCMC> damn... hehe... after logout and login: "$ conjure-up openstack" returns: "snap-confine has elevated permissions and is not confined but should be. Refusing to continue to avoid permission escalation attacks"
<ThiagoCMC> any clue?
<cnf> jam: so i need a constraint, but not a binding
<jam> cnf: first, check if that works, second you could set up routes in whatever is actually managing those networks to allow 172.20* to talk to 172.10*
<stokachu> ThiagoCMC: what does `snap list` show?
<Budgie^Smore> and solved my sshultte issue :) so now I can ssh to the "private" instances for troubleshooting :)
<jam> cnf: binding is where you want the application to talk, constraint is extra information about the machines you want
<jam> cnf: the containers are likely set up to say "172.20.20.1 can you please send this packet to Juju controller at 172.20.10.1" and they're saying "sorry I can't do that"
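[jam's bind-vs-constraints distinction illustrated as one hypothetical deploy; the application name and space names are invented. `--bind` picks the space the application's endpoints talk on, `--constraints spaces=` lists the spaces the machine must have an interface in.]

```shell
# Sketch only: requires a bootstrapped controller, so wrapped in a
# function rather than executed. Names below are examples, not from a
# real bundle.
deploy_with_spaces() {
  juju deploy myapp \
    --bind space-openstack-mgmt \
    --constraints spaces=space-openstack-mgmt,space-maas
  # endpoints talk on space-openstack-mgmt; the container additionally
  # gets a NIC on space-maas so the agent can reach the controller
}
```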
<ThiagoCMC> stokachu, "snap list" shows: "conjure-up  2.1.2 140 ... core        16-2     1441"
<stokachu> ok that should be fine
<stokachu> hmm
<ThiagoCMC> =/
<cnf> well, they are
<stokachu> ThiagoCMC: asking in #snappy
<jam> cnf: now, choosing to firewall in that fashion is something people can choose to do, but if they do that, then they need to give us some other way to talk to the controller
<ThiagoCMC> ok
<cnf> jam: my point was i expected juju to sort that out
<cnf> or at least communicate it clearly
<jam> cnf: the particular issue here, is that when the container comes up, it is like a separate machine to us, but because we can't reach the controller we can't tell it that we can't reach it
<jam> cnf: consider if this were isolated machines (not containers)
<stokachu> jamespage: i pull the readme from https://jujucharms.com/openstack-base/ but it still says newton is the latest
<jam> if we ask maas to bring up a machine that can't talk to us, there isn't much we can do. The Controller couldn't 'ssh' in to find out why, cause we can't talk to it.
<jam> cnf: there is some stuff that we could special case for containers, cause we can poke at them from the host machine, but it is pretty fiddly and isn't something that we've gotten to yet
<cnf> jam: maybe something like "this is the juju network, always be in it"
<jam> cnf: again, its perfectly reasonable to have *routers* rather than need to be in the network
<jam> that's what gateways *do*
<cnf> jam: yeah, i don't _want_ my storage network routed
<jam> cnf: my home router knows how to get to 8.8.8.8, it doesn't live in 8.8.8.8 network
<beisner> stokachu, i believe we have ocata stable charm store bundle in flight.  with this cycle being short, and the first offset cycle, we kept newton as current stable in that published bundle, pending further verification.
<cnf> jam: yes, but your home network isn't a security domain
<jam> cnf: so, absolutely a choice you can make. and one you're choosing. but you can set up only routes to the Juju controller network
<jam> not to all possible public IPs
<jam> for example
<jam> or you can make your containers on the Juju network
<jam> which likely *is* routable
<jam> cnf: cause the Juju controller needs access usually to the outside world
<cnf> ...
<jam> cnf: so it depends what you mean by "I don't want my storage network to be routable"
<stokachu> beisner: ok cool
<jam> cnf: if every machine that is on the storage network is also on the externally-routable network, that sounds less secure
<jam> cnf: than setting up just a route from 172.STORAGE to 172.JUJUCONTROLLER
<cnf> it should be more clear on why shit isn't working, either way
<cory_fu> kwmonroe: :)
<jam> cnf: I absolutely hear you on that, and honestly, shit is hard to get right and hard to convey, and hard to probe and know what's going on.
<stokachu> ThiagoCMC: re: openstack version, it is Newton, but Ocata is pending
<jam> cnf: we are making improvements here, but we don't do everything everywhere yet
<jam> anyway, need to go take my son to bed
<jam> bbaiab
<cnf> cannot deploy bundle: cannot deploy application "cinder-ceph": subordinate application must be deployed without constraints
<cnf> ffs
<cnf> ok, lets see how this behaves
<cnf> hmm, juju doesn't add interfaces to existing containers, it seems
<cnf> k, destroying, and deploying again
<cnf> and then off home
<admcleod> ThiagoCMC: did you want to deploy this on 'some machines' or just on a machine with lxd for testing/dev/playing around?
<Budgie^Smore> lazyPower is there a reason why after an a fresh install there is 3 heapster replica sets?
<ThiagoCMC> admcleod, at first, I want OpenStack in one box. So, yes, for testing... I want to go Juju only but it looks like conjure-up is required to start this thing more easily...
<ThiagoCMC> Later, I'll go ahead and try to deploy it in my MaaS environment...
<ThiagoCMC> I already have MaaS up and running, with 20 bare-metal servers...
<ThiagoCMC> But I need to fully understand Juju first.
<admcleod> ThiagoCMC: well, if you wanna do it on one box with lxd and ocata, the openstack-on-lxd link will work for that
<ThiagoCMC> And conjure-up "masks" it...
<ThiagoCMC> admcleod, that's nice!
<ThiagoCMC> do you have some link?
<ThiagoCMC> with docs or quick and dirty guide?  =P
<admcleod> ThiagoCMC: https://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html
<beisner> ocata is also WIP there, but there's a branch
<ThiagoCMC> nice!
<admcleod> ThiagoCMC: the doc refers to a git repo with bundles, the ocata bundle is in a branch called 'add-ocata' (for now, we will merge that soon)
<admcleod> what beisner said
<ThiagoCMC> I'm really interested in Ocata, Senlin, Aodh, Ceilometer and Gnocchi! I already have those things working but, I manually installed everything via "apt-get".
<ThiagoCMC> Juju might be the way to go! I believe...
<ThiagoCMC> I even have senlin-dashboard! https://launchpad.net/~sandvine/+archive/ubuntu/cloud-archive-ocata  :-D
<ThiagoCMC> But it is too complicated to maintain a cloud with just a bunch of ansible playbooks and/or manual steps...
<lazyPower> Budgie^Smore: that seems like a bug
<lazyPower> Budgie^Smore: and i haven't seen that prior. can you paste me a kubectl get po,svc,rc --all-namespaces so i can see precisely what you're referring to?
<Budgie^Smore> yeah I am filing a bug against CDK right now and you guys can funnel it the right places :)
<Budgie^Smore> oh and yes I can grab that out
<lazyPower> ta
<Budgie^Smore> lazyPower I will say it doesn't look like it increases the number of pods, just replica sets
<lazyPower> Budgie^Smore: ok, was this a fresh install? as in no upgrade, just a bare juju deploy cdk and this was the result?
<Budgie^Smore> lazyPower yes, the only changes I made were to constrain services to subnets and instance type
<Budgie^Smore> lazyPower interesting it doesn't seem to show up in the cli output
<Budgie^Smore> lazyPower: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/241
<ThiagoCMC> Well, "conjure-up openstack", after starting the OpenStack deployment, failed again: https://paste.ubuntu.com/24236284/
<ThiagoCMC> I gave up...   :-(
<lazyPower> Budgie^Smore: so CLI doesn't list the additional RC's that I see, but it's showing up in the dashboard?
<lazyPower> thats fun
<Budgie^Smore> lazyPower yeah!
<lazyPower> Budgie^Smore: can you kubectl describe those RC's that are ghosted?
<lazyPower> kubectl describe rc heapster-v1.2.0.1-907310421  for example
<Budgie^Smore> yeah definitely something to do with the dashboard and not kubectl!
<lazyPower> Budgie^Smore: ok, i see the follow up there too
<lazyPower> i wonder, if you refresh do they go away?
<lazyPower> the dashboard isn't socket based, so you dont get real time updates, its entirely possible that you loaded the dashboard while operations were happening and they weren't fully cleaned up during that page load
<Budgie^Smore> lazyPower nope
<lazyPower> oh well that nukes that idea out of the water
<lazyPower> thats a weird one
<Budgie^Smore> even did a ctrl+f5 to make sure it was a fresh pull
<lazyPower> if we can reproduce i'd say we need to file this against the dashboard project upstream, but barring that, you've found a fun one-off
<lazyPower> Budgie^Smore: when i'm done validating the etcd work i have in the pipeline i'll context swap and try to reproduce
<Budgie^Smore> lazyPower I vaguely remember seeing it in the last cluster... I am not sure if it is the dashboard project or where the initial deployment happens
<Budgie^Smore> lazyPower no rush, it is just a weird one, probably could be low priority as it is just an oddity (unless I am missing any risks to leaving those there)
<lazyPower> Budgie^Smore: nah, if the api doesn't know anything about it, you're probably fine
<lazyPower> Budgie^Smore: just for fun, kubectl get rs doesn't show them either right?
<lazyPower> replicaset vs replication controller
<lazyPower> for clarity, here's the upstream definition of the difference
<lazyPower> ReplicaSet is the next-generation Replication Controller. The only difference between a ReplicaSet and a Replication Controller right now is the selector support. ReplicaSet supports the new set-based selector requirements as described in the labels user guide whereas a Replication Controller only supports equality-based selector requirements.
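[The check lazyPower walks Budgie^Smore through above, sketched as one helper: an object visible via `get rs` can be absent from `get rc`, which is exactly the "Schrodinger's cat" effect seen here. Wrapped in a function since it needs a live cluster.]

```shell
# Assumption: kubectl is configured against the cluster in question.
compare_replica_objects() {
  kubectl get rc --all-namespaces   # older replication controllers
  kubectl get rs --all-namespaces   # newer replica sets (set-based selectors)
}
# then, for any set that only shows up in one listing:
#   kubectl describe rs <rs-name>
```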
<Budgie^Smore> lazyPower damn it we found them... oh yeah I like ReplicaSet / Deployment model :)
<lazyPower> ok, so i had suspected that might be the case too
<lazyPower> describe them and lets see just dubya tee fig
<Budgie^Smore> lazyPower, kubectl describe rs <rs name>?
<lazyPower> yep
<ThiagoCMC> stokachu,  "conjure-up openstack", after starting the OpenStack deployment, failed again: https://paste.ubuntu.com/24236284/
<Budgie^Smore> lazyPower yeah now I feel like I am being visited by Schrodinger's cat here!
<lazyPower> it both exists and doesnt exist at the same time eh?
<Budgie^Smore> lazyPower pretty much!
<lazyPower> heisenbug
<Budgie^Smore> lazyPower not sure if it has reached heisenbug level though
<lazyPower> Budgie^Smore: ok, i'll give this a reproduction attempt though
<lazyPower> there has to be a reasonable explanation as to why there are replicasets and replication controllers in there that i dont see when testing
<lazyPower> this might be related to our latest release, it might be some funkyness in the templates, it might also be a legit race bug where the addons are getting created and recreated
<Budgie^Smore> lazyPower I suspect the latter
<kklimonda> can I disable juju unit/application for debugging, so it doesn't interfere with what I'm doing on the server?
<ThiagoCMC> admcleod, the following instructions also don't work: https://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html
<ThiagoCMC> The "Juju Profile Update" is incomplete
<cnf> wow
<cnf> if i define constraints on my applications like jam said
<ThiagoCMC> I can't find those "lxd-profile.yaml" and "config.yaml" files...
<cnf> i get no hosts from MAAS
<cnf> hmm
<beisner> hi ThiagoCMC - i've run through openstack-on-lxd verbatim a number of times w/o issue.  at what point are you hitting the issue?
<ThiagoCMC> beisner, at "Juju Profile Update" - where does that "lxd-profile.yaml" come from?
<ThiagoCMC> Also, on next step, I can't find that "juju config.yaml" either...
<beisner> ThiagoCMC, did you complete the Host Setup section? https://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html#host-setup   (ie. clone the repo)
<ThiagoCMC> There it is!!!  lol
<beisner> woot ThiagoCMC
<ThiagoCMC> ^_^
<bdx> does libjuju support `juju attach`?
<marcoceppi> bdx: good question, not sure, if it doesn't it should
<Budgie^Smore> lazyPower and I am back at getting AWS to behave :-/
<kwmonroe> hey petevg, is this matrixy thing of concern to you? http://paste.ubuntu.com/24237456/
<kwmonroe> petevg: specifically the "ha" bits.  we already talked about the "Could not run reboot" bits.
<petevg> kwmonroe: yep. That's a bug, and a bad one, too. Please file a ticket.
<kwmonroe> petevg: i'm asking you first because i'd love to blame cory_fu (https://github.com/juju-solutions/matrix/commit/7329196f6706e84c29b4cffa6cd64feb7c19f55e), but he already got the better of me when i tried to blame a layer-basic thing on him this morning.
<petevg> kwmonroe: no, that's my PR. That should be context.config.ha. Not sure why the tests didn't catch it :-/
<kwmonroe> let's still blame cory.
<petevg> ... it might just be a missing case. Bleh. I'll fix it asap, in any case.
<kwmonroe> petevg: mucho thanko: https://github.com/juju-solutions/matrix/issues/108
<petevg> This is my bad. He didn't catch it, though :-)
<petevg> kwmonroe: fix here https://github.com/juju-solutions/matrix/pull/109
<petevg> kwmonroe: it was an entertaining bug. The unit tests didn't catch it, because I was mocking out the context (unit tests don't test your assumptions!)
<petevg> ... and the functional tests didn't catch it, because it causes a failure only when the test has failed.
<petevg> So my "make sure this fails" tests were failing as expected.
<petevg> I should make the functional tests check for failure reasons at some point. For now, the above PR should fix my bad.
#juju 2017-03-24
<kjackal> Good morning Juju world!
<admcleod> kjackal: arent you a happy bunny
<kjackal> admcleod: what's up crazy boy?
<cnf> morning
<cnf> last day of the week, lets see if i can get something working
<admcleod> kjackal: crrrrrazyboy
<admcleod> cnf: fingers crossed!
<cnf> hmm
<admcleod> cnf: seeing your frustration makes me want it to work too
<cnf> admcleod: thanks :P
<admcleod> cnf: as the americans would say, we're "rooting for you". im not american though so thats pretty weird
<cnf> so, adding constraints to my stuff broke everything
<cnf> but i need to, or my lxd containers don't have networking
<admcleod> or, add a single static route to the controller?
<admcleod> ah you have no default gateways right. ok.
<cnf> right
<cnf> and adding a route means extending the vlans to a different point in the network
<admcleod> i wonder if you could add the controller to all the spaces
<cnf> well, no, afaik juju advertises the controller ip
<cnf> so if i add constraints: "spaces=space-maas" to an application
<cnf> maas no longer offers
<cnf> machines
<cnf> hmm
<admcleod> cnf: remind me again why the applications in lxd containers arent bound to space-openstack-mgmt?
<cnf> they are?
<cnf> they aren't to space-maas
<admcleod> oh, im looking at a status paste from yesterday rather than a bundle
<admcleod> ok
<cnf> hmm, and now maas is not generating logs o,O
<cnf> why did maas stop logging? o,O
<cnf> ok, that's fixed, i think
<cnf> ok, i'm deploying something
<cnf> we'll soon see what :P
<admcleod> heh
<joedborg> morning all.  I've got an issue bootstrapping juju to local LXD https://pastebin.canonical.com/183637/ it's building the container but then fails towards the end, saying it can't connect but the bridge is up and I can access the port via telnet
<admcleod> joedborg: did you try https://bugs.launchpad.net/juju/+bug/1633788/comments/14 ?
<mup> Bug #1633788: juju 2.0.0 bootstrap to lxd fails (connect to wrong "remote" IP address) <canonical-is> <eda> <juju> <lxd> <lxd-provider> <regression> <s390x> <uosci> <OpenStack Charm Test Infra:Confirmed for 1chb1n> <juju:Fix Released by axwalk> <https://launchpad.net/bugs/1633788>
<admcleod> joedborg: the last part atleast
<joedborg> admcleod: I can try the dev version and see
<admcleod> joedborg: more just the yaml mod
<joedborg> I did try the cloud def, but not with dev version
<admcleod> joedborg: what ver do you have installed now?
<joedborg> admcleod: I had 2.1.2, now 2.2-a
<joedborg> admcleod: get the same issue
<admcleod> joedborg: presumably you've named the cloud in clouds.yaml the same as the name you're using to bootstrap
<joedborg> admcleod: yeah, so i just copied "lxd" name, can see that in the cloud list and then bootstrapped with "lxd"
<admcleod> joedborg: and logs still show it trying to connect to .1 instead of controller ip from start of bootstrap?
<joedborg> admcleod: it was always trying to connect to the right ip
<joedborg> admcleod: 2017-03-24 12:32:20 ERROR cmd supercommand.go:458 new environ: Get https://10.201.51.1:8443/1.0: Unable to connect to: 10.201.51.1:8443
<joedborg> admcleod: ah, i see what you mean
<admcleod> joedborg: i thought your controller was .123?
<admcleod> right
<joedborg> admcleod: this is the IP of LXD
<joedborg> admcleod: the bridge connector
<admcleod> joedborg: oh yeah. :)
<joedborg> admcleod: which i can access and port 8443 is open
<joedborg> admcleod: so I think whatever's on that port isn't replying as juju expects
<admcleod> joedborg: so as beisner asked, proxy set anywhere? lxc config?
<admcleod> mm
<joedborg> https://www.irccloud.com/pastebin/aK1Xhdmk/
<admcleod> what ver lxd?
<joedborg> 2.12
<joedborg> admcleod: it updated itself recently
 * admcleod updates
<admcleod> joedborg: what repo did you get 2.12 from?
<joedborg> admcleod: xenial repos i believe
<joedborg> admcleod: now checking, but don't think i have any PPAs
<joedborg> admcleod: I tell a lie
<joedborg> admcleod: deb-src http://ppa.launchpad.net/ubuntu-lxc/lxd-stable/ubuntu xenial main
<admcleod> joedborg: im confused, 2.12 isnt an lxd version i can see anywhere, nor lxc 2.12..
<joedborg> admcleod: $ lxd --version
<joedborg> 2.12
<joedborg> $ lxc version
<joedborg> 2.12
<admcleod> joedborg: apt-cache policy lxd ?
<joedborg> https://www.irccloud.com/pastebin/M0nDXowp/
<joedborg> admcleod: ^^
<admcleod> joedborg: ..installing...
<joedborg> admcleod: ta
<admcleod> joedborg: bootstrapping
<joedborg> admcleod: did lxd launch work for you post update?
<admcleod> joedborg: lxd? launch? lxc launch yes..
<admcleod> joedborg: back in 15
<joedborg> admcleod: yeah i meant that.  that didn't work for me until i purged and reinstalled
<joedborg> admcleod: (y)
<admcleod> joedborg: bootstrap OK
<gabriel> hi guys
<gabriel> I have a question: how do I rename a charm which has already been pushed?
<gabriel> In the same way, is it possible to remove a charm?
<joedborg> admcleod: with juju 2.1.2?
<gabriel> yes
<admcleod> joedborg: yep
<zeestrat> gabriel: I think rick_h mentioned that the remove API is under construction. Search for "remove a charm" in https://irclogs.ubuntu.com/2017/03/23/%23juju.html
<rick_h> zeestrat: gabriel yes, the delete API is in testing and not in the cli for end users yet.
<rick_h> gabriel: just repush it to the new name and file a bug for the delete and I'll get someone to peek at it
<gabriel> thanks!
<admcleod> joedborg: https://stgraber.org/2016/04/18/lxd-api-direct-interaction/
<admcleod> joedborg: just curl --insecure https://10.201.51.1:8443/1.0 should be enough to see if the api is reachable /functional from the controller
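[The reachability probe admcleod describes, wrapped so it can be re-run from wherever connectivity is in doubt. The default address is the LXD bridge IP from joedborg's paste; `--insecure` skips the self-signed certificate check, as in the linked stgraber post. Wrapped in a function since it needs a live LXD daemon.]

```shell
# usage: lxd_api_reachable [host]  -> exit 0 only if the API answers
lxd_api_reachable() {
  curl --insecure --silent --max-time 5 \
    "https://${1:-10.201.51.1}:8443/1.0" > /dev/null
}
# e.g.: lxd_api_reachable 10.201.51.1 && echo "API up"
```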
<joedborg> admcleod: let me check
<gabriel> rick_h: I just opened a support request to remove the old charm
<rick_h> gabriel: what's the link and I'll ping someone
<gabriel> rick_h: thank you very much, but I cannot find the support ticket link
<rick_h> gabriel: oic, that support link. Ok
<rick_h> gabriel: yea that goes through a new system for the jaas beta stuff. I'll have to see where that ends up. Thanks for the heads up
<gabriel> I see, thanks ! :)
<joedborg> admcleod: yeah that curl works fine
<admcleod> joedborg: i figured, since it looks more like a network error in your paste
<joedborg> admcleod: but i don't get what kind of problem because all the stuff it's trying to access is there
<admcleod> joedborg: *nod*
<gabriel> rick_h: thanks a lot, it looks removed !
<rick_h> gabriel: oh cool, that was fast
<cnf> ok
<cnf> next problem :/
<BlackDex> next challange you mean ;)
<joedborg> admcleod: just purged lxd and juju again (with the bridge and zfs pool), reinstalled with snappy and get the same issue.  so either it's a bug that only surfaces on my laptop's setup or there's some configs somewhere not being purged
<admcleod> joedborg: did you lxd attach to the container to check the connectivity to the controller? (lost my scrollback)
<admcleod> joedborg: s/lxd/lcx
<admcleod> lxc
<joedborg> admcleod: I can still curl it yeah
<admcleod> joedborg: weird
<joedborg> admcleod: yeah i suspect purging isn't removing a config file somewhere
<skayskay> mthaddon: would it make sense for apply-config-changes in mojo to check for template files that have changed?
<Budgie^Smore> hey lazyPower am I supposed to be able to use S3 storage out of the box with CDK regardless of infra?
<Budgie^Smore> lazyPower I am aware of having to setup a storage class, etc. but otherwise it should just work, right?
<lazyPower> Budgie^Smore: I think that depends on IAM profiles being assigned to the ec2 units correct?
<Budgie^Smore> lazyPower don't think so, cause what if I wanted to build my own bare metal backed by s3 storage, ec2 doesn't come into it
<lazyPower> so this backed by s3 is whats confusing me
<lazyPower> s3 isn't listed as a supported volume type
<lazyPower> in what context are you using s3 sotrage
<lazyPower> *Storage
<Budgie^Smore> lazyPower does there need to be IAM policies, roles, other access issues overcome - probably but nothing to do with ec2... persistent volumes
<lazyPower> Budgie^Smore: so if you're talking persistent volumes, you're referring to things like EBS, GCEPD, CinderVolumes, CephRBD, or gluster/nfs shares
<lazyPower> last i checked, object stores were not viable backend volume types
<Budgie^Smore> lazyPower EBS falls under S3
<Budgie^Smore> kinda
<lazyPower> not really
<lazyPower> EBS is a block device
<lazyPower> s3 is an object store
<jrwren> EBS snapshots are stored in S3. so if you use `ec2-create-snapshot`, you are copying your EBS to S3.
<Budgie^Smore> OK right now that is a semantics argument at worst... I should be able to use EBS PV including using PVC to create it running on bare-metal
<Budgie^Smore> right?
<ybaumy> how can i add ssh key to machines
<ybaumy> a  new one
<ybaumy> i added the key to the model
<ybaumy> but i want to distribute it to all machines and lxd containers
<rick_h> ybaumy: have to ssh-copy-id them to the machines and containers. ybaumy or do a juju run of 'ssh-import-id' maybe
<ybaumy> rick_h: ok that works
<ybaumy> thanks
<lazyPower> marcoceppi: https://github.com/juju-solutions/layer-etcd/pull/85
<gully-foyle> hey guys, what're some things you don't like about juju and what can be improved?
<cnf>  i have a list...
<gully-foyle> well let's see it
<cnf> https://bugs.launchpad.net/juju
#juju 2017-03-25
<bryanruiz> hi.. i have a manually provisioned machine and im trying to change the ip address of it
<bryanruiz> how do i go about doing that?
<bryanruiz> the ip changed because of a new dhcp server (router)
<bryanruiz> i cant seem to find the juju env commands in 2.1 to be able to run this script: https://gist.github.com/niedbalski/ffecd47ac9620a1d425a
#juju 2017-03-26
<ybaumy> can you add to juju gui the ability to set constraints? or am i missing something
#juju 2018-03-19
<bdx> kwmonroe: sup pinggg
<bdx> kwmonroe: per the upstream "hadoop-spark" bundle, wondering what the purpose of having the hadoop-client charm deployed to the same machine as spark charm is?
<bdx> from what I can tell the spark charm includes the hadoop-client layer
#juju 2018-03-20
<gnuoy> Can I specify a relative local directory as the charm source in a bundle ?
<admcleod_> gnuoy: pretty sure you can
<gnuoy> Everything I try results in one form of explosion or another eg https://paste.ubuntu.com/p/rdFhVWn6kg/
<gnuoy> Fully qualified paths seem fine
<admcleod_> gnuoy: try specifying the series as an option
<admcleod_> gnuoy: for the vault charm
<gnuoy> admcleod_, hurray \o/
<gnuoy> admcleod_, thank you
<admcleod_> gnuoy: it looks like it assumes a series, so charm: xenial/vault would work if you also had ./xenial/vault
<gnuoy> that didn't seem to work
<admcleod_> gnuoy: i know ive done something like that in the past but perhaps its changed
<admcleod_> gnuoy: i guess series specification isnt a huge issue
<gnuoy> I mean xenial/vault didn't, the series option is golden
<gnuoy> admcleod_, not to me, I'm happy to restate it
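For reference, a minimal bundle sketch of the combination that worked here — a relative local charm path with the series given explicitly as an option. The paths and application name are illustrative, not gnuoy's actual bundle:

```yaml
# Hypothetical bundle.yaml fragment: relative charm path plus an
# explicit series, which avoids the series-inference failure above.
applications:
  vault:
    charm: ./vault     # relative path to the local charm directory
    series: xenial     # stated explicitly rather than inferred
    num_units: 1
```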
<TheAbsentOne> How can one use a sql file in a hook to create and populate a database with the reactive framework properly? Or is there an option in the mysql charm itself for this?
<rick_h_> TheAbsentOne: so normally there'd be some flag set that the db is up and ready and at that point in time you'd use whatever db library to connect with the information stored from the relation and such and write out the data to the database.
<TheAbsentOne> yeah got to that point, I'm just not that familiar with how python can achieve this properly but Im testing things out right now, thanks rick_h_
<rick_h_> TheAbsentOne: what db is it?
<TheAbsentOne> the app will use and need both a mysql and mongo connection but for now I'm focussing on mysql
<rick_h_> TheAbsentOne: check out https://github.com/PyMySQL/PyMySQL
<rick_h_> TheAbsentOne: you should be able to add it to the requirements file for the charm I think and then use that from within the hooks
<TheAbsentOne> ohn nice that should work rick_h_ thanks for leading me somewhere! ^^
<rick_h_> TheAbsentOne: no problem, have fun!
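A rough sketch of the kind of helper the original question needs — splitting a .sql file into individual statements that a PyMySQL cursor could then execute one by one. This is an illustration, not anyone's actual charm code; the deliberately naive semicolon splitting (it ignores semicolons inside quoted strings and stored-procedure bodies) is an assumption:

```python
def sql_statements(sql_text):
    """Split a SQL script into executable statements.

    Strips '--' line comments and blank lines, then splits on ';'.
    Naive on purpose: it does not handle semicolons inside quoted
    strings or stored-procedure bodies.
    """
    lines = []
    for line in sql_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith('--'):
            continue
        lines.append(stripped)
    script = ' '.join(lines)
    return [s.strip() for s in script.split(';') if s.strip()]

# In a reactive handler, each statement would then be fed to PyMySQL
# using the connection details from the relation, e.g.:
#   conn = pymysql.connect(host=..., user=..., password=..., db=...)
#   with conn.cursor() as cur:
#       for stmt in sql_statements(open('schema.sql').read()):
#           cur.execute(stmt)
#   conn.commit()
```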
<TheAbsentOne> rick_h_: still here by any chance? If 1 charm needs both a mongodb and a mysql charm for example can juju handle this like the first method or doe I need the second way (https://pastebin.com/0GbiHFE4)
<magicaltrout> you'd need the second TheAbsentOne
<TheAbsentOne> I thought so, and can the charms from the charmstore handle this? magicaltrout
<magicaltrout> i'm not really following what went before.... handle what?
<magicaltrout> i'm working on a jdbc relation for example that would sorta eventually do what your first example did, but to my knowledge, if you want mysql and mongo related in one charm, that'll be fine as long as you have 2 hooks and tell them what to do
<TheAbsentOne> there's nothing special actually. (It's for a POC) I'm gonna use the apache php layer and I have a few php files I want to deploy as a service. One of these files reads from a mongodb and anoher from a mysql one
<magicaltrout> yeah thats fine, you'll have 2 functions one for mysql available and the other for mongo available
<TheAbsentOne> are you really working on that? That's basicly one of the subjects of my dissertation here! :o
<magicaltrout> well actually you could nest it in one probably
<magicaltrout> but from a usability point of view you'd probably want to feed back to the user if one service wasn't available and the other was
<TheAbsentOne> yep that was my first step and in a next iteration I kinda want to build a "generic type of database"-charm
<TheAbsentOne> correct!
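The "feed back to the user" idea above can be sketched as a small status helper: decide what the unit should report depending on which of the two databases is related yet. This is pure illustration — the function and the message wording are invented here, not taken from any charm:

```python
def unit_status(mysql_ready, mongo_ready):
    """Pick a (status, message) pair for a charm needing both databases.

    Returns 'active' only when both relations are available; otherwise
    'blocked' with a message naming the relation(s) still missing, so
    `juju status` tells the operator exactly what to relate.
    """
    missing = [name for name, ready
               in (('mysql', mysql_ready), ('mongodb', mongo_ready))
               if not ready]
    if not missing:
        return ('active', 'ready')
    return ('blocked', 'waiting for relation(s): ' + ', '.join(missing))
```

In a reactive charm the two booleans would come from flags like `database.available` being set; the helper's result would be passed to `status_set`.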
<magicaltrout> i've got an apache drill charm so i've got a jdbc relation
<magicaltrout> thats generic enough to do stuff
<magicaltrout> and then my ui charm can accept that
<magicaltrout> from mysql, drill, postgres or whatever
<TheAbsentOne> is it on github? I might learn a stuff or two
<magicaltrout> although i've not upstreamed mysql support yet
<magicaltrout> maybe let
<magicaltrout> me look
<TheAbsentOne> if you are interested btw, I did some basic writing on a gitbook about my research: https://ciberth.gitbooks.io/logboek-thesis/content/problem.html
<magicaltrout> https://gitlab.com/spiculedata/juju/interface-jdbc
<magicaltrout> doesn't do much currently other than offer up some jdbc details
<magicaltrout> borrowed mostly from the mysql interface
<TheAbsentOne> still useful! Thanks a lot
<TheAbsentOne> It's pretty hard finding my way through all the documentation and examples
<TheAbsentOne> as a novice
<magicaltrout> tell me about it
<magicaltrout> the only way i understood anything was from sitting with kwmonroe and cory_fu for about 6 hours in Gent
<magicaltrout> turns out kwmonroe hacks for beer
<TheAbsentOne> It's actually funny how we have this big paradox. We want to simplify things by sometimes adding complexity
<TheAbsentOne> in Ghent? You are from Belgium? :)
<magicaltrout> nope
<magicaltrout> but if someone pays
<magicaltrout> i'll go anywhere
<TheAbsentOne> hehe as a matter of fact I'm from Ghent city xD
<magicaltrout> there was a juju track at devops days twice
<magicaltrout> so we all met up there
<TheAbsentOne> and at configmngmtncmp a few years back, that's where I got to know juju
<magicaltrout> yeah
<magicaltrout> i was lying you got the correct name :P
<magicaltrout> i will have been lurking at the back somewhere
<TheAbsentOne> hehe
<TheAbsentOne> what can cause an unknown status and workload even if every function has a status_set: https://github.com/Ciberth/mycustomcharms/blob/master/firstiterations/step0/reactive/step0.py
<TheAbsentOne> logging in with ssh shows that it didn't even install apache, and debug-hooks doesn't really work as there is no hooks folder or anything yet
<rick_h_> TheAbsentOne: that means you forgot to run charm build and to use the built charm to deploy
<rick_h_> TheAbsentOne: a common error that there's definitely a bug about warning a user when no hooks found
<rick_h_> TheAbsentOne: have to get into the habit of checking charm build and charm proof to sanity check before deploying/upgrading/testing
<TheAbsentOne> oh god you are right, I built it but I still ran the unbuilt one xD guess I'm tired. Thanks rick_h_ I could have lost a lot of stupid time here
<TheAbsentOne> charm proof said everything was fine xD
<rick_h_> TheAbsentOne: yea, I usually end up with one terminal in the charm code directory and one in the build directory so I split the work of editing/etc of the charm and deploy/juju interactions while testing
<TheAbsentOne> that's exactly what I'm gonna do, I feel so stupid lol
<magicaltrout> i've done that about 6000 times
<rick_h_> don't, you don't know how many times I've done that
<rick_h_> well...not sure it's 6k but magicaltrout is generally more #winning than me :P
<TheAbsentOne> yeah kinda figures that it happened to others before
<rick_h_> kwmonroe: around? having issues auth'ing to the k8s webui and curious if I'm missing something
<kwmonroe> yup rick_h_
<kwmonroe> rick_h_: do you see the login prompt on the webui, or does it show some gobbledeegook json/xml?
<rick_h_> hey, so I have the proxy going and I go to my /ui URL and get redirected asked about uploading a config file or using a service account token
<kwmonroe> rick_h_: SKIP
<rick_h_> kwmonroe: so I point it at my .kube/config and it says "Authentication failed. Please try again."
<rick_h_> oooooh...ummm...wtf?
<kwmonroe> isn't there a skip option?  i never use that config file doohickey
<kwmonroe> just type admin/<password> and go about your business
<rick_h_> yea, fair enough, there is a skip. I guess I kind of read that like "GO" and "Cancel" and didn't catch it was skip
<rick_h_> I didn't even have to do that much
<rick_h_> it took me straight to the dash
<kwmonroe> to be fair rick_h_, i have no idea what that token/config would allow you to do that skipping prevents.
<rick_h_> oh hmm, but a crossed out head in the corner
<rick_h_> if I click that it'll bring back up the dialog
<rick_h_> kwmonroe: so where do I go to auth with user/pass?
<rick_h_> kwmonroe: if I go to "settings" in the dashboard I get a 403
<kwmonroe> hm, you should have been prompted for a user/pass first, then you'd get the "upload a config / token" -- the latter is what i skip
<rick_h_> kwmonroe: nope
<kwmonroe> rick_h_: did you conjure-up this thing, or juju deploy?
<rick_h_> kwmonroe: juju deploy canonical-kubernetes
<rick_h_> well, I had to download the bundle to put etcd on containers so I can fit it in 6 machines :/
<rick_h_> but same diff
<kwmonroe> rick_h_: does "kubectl cluster-info" give you output that says "k8s dashboard available at:  blah"?
<rick_h_> kwmonroe: rgr it does
<kwmonroe> and is that the URL you're trying to access?
<rick_h_> kwmonroe: no, I ran kubeproxy and went to the 127. address
<kwmonroe> because maybe you're trying to hit the master/ui directly versus the proxy
<rick_h_> per the instructions :)
<kwmonroe> oh mylanta, didn't know we had instructions
<kwmonroe> ryebot: help this poor soul ^^
<rick_h_> ah yea, hitting the url directly w/o proxy has a basic http dialog
<ryebot> lookin'
<kwmonroe> ok - try admin and the password from ~/.kube/config-$JUJU_MODEL
<rick_h_> hah, so I can login with my .kube/config file except that it doesn't really log me in
<rick_h_> it let's me through the htaccess stuff but I still get 403 on settings/etc
<kwmonroe> you have to login, then click skip, then physically skip around the room, then press cancel.
<rick_h_> kwmonroe: so I argue one cannot login to the dashboard properly
<kwmonroe> lol
<rick_h_> kwmonroe: ooooh, I forgot the cancel part
<kwmonroe> your argument is noted
<rick_h_> :P
<ryebot> rick_h_ use the kubeconfig from /root on a k8s worker
<rick_h_> ryebot: not the instructions "juju scp kubernetes-master/0:config ~/.kube/config" ?
<rick_h_> that's the config I'm using, is the worker ones different?
<ryebot> there's two sets of credentials, for whatever reason the ones in /root on a worker get me logged in
<ryebot> they're probably somewhere on the master too, but idk where
<rick_h_> ryebot: /root/cdk/kubeconfig ?
<rick_h_> ryebot: there's no config in /root
<rick_h_> ryebot: my /root looks like "cdk  curtin-install-cfg.yaml  curtin-install.log  snap"
<ryebot> rick_h_ yeah that sounds right, sorry going off memory here
<rick_h_> ryebot: gotcha, ok
<rick_h_> ryebot: ah, yea the kubeconfig is a service token vs a user account
<ryebot> it's ... a little screwy and kinda grew into that organically. I think we have a task to make that less awful.
<rick_h_> ryebot: hmm, ok. I'm working out how things work so I'll tinker and find the service account stuff and see if I can list/create them and such then
<rick_h_> so thanks for skip kwmonroe and ty ryebot for saying I'm not holding it wrong
<ryebot> xD
<kwmonroe> does that mean you're in rick_h_?
<tvansteenburgh> we should update the instructions
<tvansteenburgh> and we should not be telling users to auth as service accounts :P
<tvansteenburgh> i'm trying to remember why we used to kubectl proxy to reach the dashboard
<tvansteenburgh> i haven't done that in ages
<tvansteenburgh> although i would expect it to work with basic auth anyway
<knobby> tvansteenburgh: what is your preference? an ingress or loadbalancer depending on substrate?
<tvansteenburgh> knobby: well with cdk the lb is already there
<knobby> tvansteenburgh: so the load balancer for the api server would pull double duty to pass off to the dashboard service?
<tvansteenburgh> eh, doesn't it already? if i kubectl cluster-info, my dash url is the lb ip address
 * knobby isn't certain
<knobby> I'm just trying to figure out what the goal is here
<tvansteenburgh> knobby: the problem is that rick_h_ followed our bundle instructions, which tell you to run kubectl proxy to access the dash, but then he discovered he couldn't basic auth using that method
<tvansteenburgh> i haven't tried this myself yet
<tvansteenburgh> i can't remember the last time i used kubectl proxy though
<knobby> tvansteenburgh: right, I get that the general problem is that the CDK instructions aren't as good as they should be for the dashboard and I'd like to see what we need to do to fix it. But I think step one there is to define the ideal experience. How do we talk to the dashboard? Is it an ingress, through a lb, or hidden behind kube proxy? Then we work towards that. kjackal has done some work here recently and
<knobby> might have some opinions...
<knobby> tvansteenburgh: I don't use my dashboard either and that hurts my perspective here
<rick_h_> tvansteenburgh: the bigger issue is that you cannot log into the dashboard login UX with your config
<rick_h_> tvansteenburgh: I'm not sure how that's supposed to work, but the proxy just bypasses the http basic auth and you end up at the dashboard and logging into that itself doesn't work
<rick_h_> kwmonroe: so no, I'm into the main dashboard but as an anon user vs as "admin"
<rick_h_> that's why things like the settings nav menu give me a 403 (I assume)
<rick_h_> kwmonroe: tvansteenburgh let me know if you want me to screenshare what I mean if I'm not being clear
#juju 2018-03-21
<magicaltrout>  so
<magicaltrout> the juju gui
<magicaltrout> tells me 2 charms are blocked
<magicaltrout> juju status tells me all is green
<magicaltrout> on jaas
<magicaltrout> who broke it?
<rick_h_> magicaltrout: hmm, reload the browser? maybe something got missed in the websocket coms?
<magicaltrout> did that
<magicaltrout> went back to the control page went back to the model
<magicaltrout> etc
<zeestrat> Hey rick_h_, did you ever manage to scrounge together any of the backend folks regarding CI and the charm store?
<rick_h_> zeestrat: yes, sorry. one sec
<rick_h_> magicaltrout: so where's it say it's blocked? In the inspector floaty box thing on the left?
<rick_h_> zeestrat: so, we were at a sprint and we had some sessions on it and so the team's working on changes to enable what has been called "agents" that we've used for communication between microservices but allowing users to create and manage agents themselves
<rick_h_> zeestrat: so that's ongoing work from the sprint that the team's doing, and some of the terms might change as we move from internal details to a feature for all users
<rick_h_> zeestrat: when we've got something test-able you can be sure I'll reach out and show off to folks in a juju show/blog post about using it
<magicaltrout> rick_h_: https://imagebin.ca/v/3vetcUjfVsrQ
<magicaltrout> https://imagebin.ca/v/3vettfOo8Psf
<rick_h_> magicaltrout: bugging hatch (who's not in here that punk) to see if he knows anything to check there.
<rick_h_> magicaltrout: sounds like a buggy bug, but not sure how it got into that state
<magicaltrout> thanks rick_h_
<rick_h_> gotta love it "did he reload" yea...we got that
<zeestrat> rick_h_: Thanks. That sounds good. Please don't hesitate to ask if they would like some direct feedback on use cases before things land.
<rick_h_> zeestrat: so the goal is going to be to expose the current stuff as a form of unblocking with a path forward that's cleaner I think.
<rick_h_> zeestrat: so there will be some room for testing stuff out and then refining
<rick_h_> zeestrat: otherwise we're starting from scratch which will take longer
<rick_h_> magicaltrout: so he's going to check something and get back in a bit. Are the images ok to go into a bug?
<rick_h_> magicaltrout: though I think he's going to ask for the actual wss coms to see if it's juju lying or us
<magicaltrout> yeah i'm just trying to finish off the jaas bundles so nothing top secret in there
<rick_h_> magicaltrout: ok
<rick_h_> magicaltrout: do you know how to view the web socket messages in chrome?
<magicaltrout> hmm
<magicaltrout> it seems to have sync'd up rick_h_
<magicaltrout> cause i refreshed for the websocket stuff
<magicaltrout> and not it all green
<magicaltrout> not much to show any more =/
<rick_h_> magicaltrout: not/now?
<magicaltrout> er
<magicaltrout> yeah
<rick_h_> magicaltrout: hmm, yea my bet is that the websocket that updated the status  got missed/dropped somehow
<magicaltrout> fair enough
<magicaltrout> other random question
<rick_h_> magicaltrout: and that a fresh stack with the current status cleared up the confusion
<magicaltrout> is it just me who feels the import export icons are the wrong way around? :)
<rick_h_> magicaltrout: lol, issue filed. I always have to double check. I'll have to bug the design folks again as it's been a while
<rick_h_> magicaltrout: at one point they were looking at more clear images to use
<zeestrat> rick_h_: Gotcha. Looking forward to it. Thanks again!
<bdx> cory_fu, reactive-peeps: https://github.com/juju-solutions/charms.reactive/issues/new
<bdx> GEHHHH
<bdx> https://github.com/juju-solutions/charms.reactive/issues/166
<cory_fu> :)
<cory_fu> bdx: Can you not use venv?  And does it fail if you do?
<bdx> yeah, not using venv
<bdx> it seems to fail in both situations
<cory_fu> bdx: I feel like I ran into this before and found some workaround
<cory_fu> But I don't recall what it was
<cory_fu> bdx: Can you try adding setuptools_scm to the start of the wheelhouse.txt in a copy of layer:basic?
<bdx> yes
<cory_fu> bdx: Is this failing during build or during deploy?
<bdx> cory_fu: deploy
<cory_fu> bdx: From this comment https://github.com/pypa/pypi-legacy/issues/322#issuecomment-291733676 it sounds like we need to bump the pip version in layer:basic's wheelhouse.txt: https://github.com/juju-solutions/layer-basic/blob/master/wheelhouse.txt#L1
<bdx> cory_fu: nice find!
<cory_fu> bdx: Would you mind testing that and submitting a PR if it works?
<cory_fu> I'm in the middle of something else
<bdx> np
<cory_fu> Thanks!
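For anyone following along, the proposed fix amounts to a one-line change in a copy of layer:basic's wheelhouse.txt — raising the pip pin (the exact floor below is illustrative, and the thread goes on to find that this alone wasn't enough):

```
# Sketch of the wheelhouse.txt bump in a copy of layer:basic;
# the rest of the file is unchanged.
pip>=9.0.2
```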
<bdx> cory_fu: with your workaround https://paste.ubuntu.com/p/DG7PC9DGHj/
<bdx> looks like its still failing down the line
<bdx> I'll play with upgrading pip now
<bdx> updating to pip>=9.0.2 still gives the same error
<bdx> when I build the charm its using pip-10.0.0.dev0-py3.5.egg
<bdx> https://paste.ubuntu.com/p/pYDyCdtfFg/
<bdx> I'm guessing I need to find a way to get that version of pip into the wheelhouse of layer-basic then
<bdx> that pip looks like it ships with the snap https://paste.ubuntu.com/p/ZdcYwJ2Yry/
<bdx> hmmmm
<bdx> there is some jiggery going on here
<bdx> cory_fu: what have you gotten me into :)
<cory_fu> lol
<cory_fu> bdx: Have you tried specifying pip==10.0.0dev0 explicitly?  I'm not sure if that will allow dev versions or if you have to get charm-build to use --pre when building the charm
<bdx> no, neither
<cory_fu> bdx: Though, I don't understand why it's still failing with 9.x
<bdx> I don't see that version as a release of pip, or a branch, or anything about it in the pip repo ... I'll try what you've suggested
<cory_fu> bdx: I suspect that using pip==10.0.0dev0 won't work, anyway.  I wonder if we need an explicit `pip install setuptools_scm` in layer:basic just prior to it trying to install the wheelhouse.
<cory_fu> I swear I remember hitting this before and having some sort of work-around
<bdx> yeah it barfed on the version https://paste.ubuntu.com/p/8xP92Q87bv/
<bdx> cory_fu: do you know how, and or why 10.0.0dev0 is used?
<cory_fu> bdx: Yeah, it's used because of https://github.com/juju/charm-tools/blob/master/charmtools/build/tactics.py#L819-L821
<cory_fu> bdx: Which points to https://github.com/pypa/pip/blob/master/news/4320.bugfix
<cory_fu> Which is something we specifically had to fix for the snap
<cory_fu> Wait, we didn't fix that, someone else had first, that's right
<cory_fu> Anyway, it was something we hit in the snap
<bdx> ok
<bdx> I'm wondering if I use non-snap version if I would still hit this .... testing
<bdx> fails the same no matter what version of charm I build with ... don't think thats the issue
<bdx> cory_fu: https://paste.ubuntu.com/p/7hNPqVvmQc/ - could this be the root of this evil?
<cory_fu> bdx: It could be but you said it happened even when using a venv
<bdx> totally, doesnt the venv get system packages?
<bdx> oooh, thats another option possibly
<cory_fu> bdx: Yeah, there's a separate option for include_system_packages which defaults to false
<cory_fu> bdx: Can I ask why you can't use a venv?  It's a better idea in general
<bdx> cory_fu: https://github.com/juju-solutions/layer-basic/blob/master/layer.yaml#L14
<cory_fu> bdx: Oh, yes, I'm dumb.  I forgot we went with true by default to make it more backwards compatible
<bdx> cory_fu: I have cron jobs that the charm sets up that need to run at the system level
<cory_fu> Try changing that to false and use_venv to true and see if it helps
<cory_fu> bdx: Just run them with charm-env
<bdx> oh I can use that in any python file?
<cory_fu> #!/usr/local/sbin/charm-env python
<bdx> the shebang?
<cory_fu> Yep
<bdx> oh saaweeeet!
<cory_fu> bdx: Hrm.  I just realized, though, that if it doesn't have the hook context clues, it will have to guess on the charm path.  It will be fine if there is only one principal, and if your charm isn't a subordinate, but I might need to add a way to specify which charm, to be more explicit
<bdx> ahh totally, it's a principal
<bdx> good to keep in mind though
<cory_fu> Ok, so as long as you're not hulk-smashing charms together, it'll be fine.  And I'll add an option to disambiguate
<bdx> ok sweet
<bdx> cory_fu: I still got the error after changing include_system_packages to false and use_venv to true, I'm trying another deploy with setuptools specified in my wheelhouse.txt
<cory_fu> bdx: Pretty sure that specifying it in wheelhouse.txt will be too late.  I think it has to be an explicit call to pip in layer:basic just before the wheelhouse install
<bdx> ahh ok
<cory_fu> i.e., right before this line: https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/basic.py#L81
<cory_fu> Actually, no.  Before line 85
<cory_fu> https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/basic.py#L85
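A sketch of cory_fu's suggestion — an explicit pip call placed just before the wheelhouse install in layer:basic's `bootstrap_charm_deps()`. The helper name is invented here, and whether this is the right fix was still being tested in the thread:

```python
import sys

def preinstall_setuptools_scm(python=sys.executable):
    """Build the pip invocation to run just before the wheelhouse install.

    Illustrative only: installs setuptools_scm first so that wheels
    which need it at install time don't fail inside the wheelhouse pass.
    """
    return [python, '-m', 'pip', 'install', 'setuptools_scm']

# e.g. subprocess.check_call(preinstall_setuptools_scm()) right before
# the wheelhouse `pip install` line in lib/charms/layer/basic.py
```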
<cory_fu> bdx: I have to head out for the evening.  Good luck, and let me know if you figure anything out
<bdx> will do
#juju 2018-03-22
<roadmr> hey juju folks. Why would a formerly-working juju unit which has nothing to do with lxd suddenly start spitting this out ? 2018-03-22 15:32:32 ERROR juju.worker.proxyupdater proxyupdater.go:153 can't connect to the local LXD server: LXD socket not found; is LXD installed & running?
<pmatulis_> roadmr, odd. what juju version and what unit/application? is LXD even installed?
<roadmr> pmatulis_: it's not because it's not needed afaik. it's on machine/0 on a juju 2.2.8 setup
<roadmr> pmatulis_: (wendigo)
<jose-phillips> question guys: if i want to upgrade from 16.04 to 18.04 and i have juju packages, how can i upgrade the packages that run on containers?
<balloons> jose-phillips, you should wait at least until bionic is stable
<jose-phillips> balloons i know is just a question to know how is the procedure
<balloons> jose-phillips, https://jujucharms.com/docs/2.3/howto-updateseries
<hml> jose-phillips: this hasn't been tested for xenial to bionic, you may run into problems.
<hml> jose-phillips: there are some networking changes in bionic
<jose-phillips> ok
<jose-phillips> im asking that because i have to upgrade openstack
<jose-phillips> to queens
<jose-phillips> i will do after bionic is stable
<jose-phillips> another questions
<jose-phillips> if i deploy something with juju in a container
<jose-phillips> how i can upgrade inside the container
<jose-phillips> the last apt-get
<jose-phillips> ?
<ryebot> Does the controller of a manually-provisioned node need to be the same arch as the node?
<ryebot> i.e., can I manually provision an arm64 node with an amd64 controller?
<hml> jose-phillips: is openstack deployed by charm?
<jose-phillips> yep
<hml> jose-phillips: i know the openstack charmers are working on upgrade scenarios etc.
<jose-phillips> for queens?
<hml> jose-phillips: juju is also working to make upgrade easier - but focused on trusty to xenial first, xenial to bionic second
<jose-phillips> Cool
#juju 2018-03-23
<Guest40677> ping
<lonroth_scania> Hello, we are at a workshop trying to get through a getting-started juju development tutorial and failing to execute the steps for building "layer-vanilla"
<lonroth_scania> Never mind about my last post. It was proxy things
<lonroth_scania> How often does maas pick up changes in hardware settings on servers?
<jhobbs> lonroth_scania: each time you run Commissioning
<lonroth_scania> But you don't want to re-commission a server just because you add - lets say - a RAM module.
<jhobbs> :)
<TheAbsentOne> Is there a list of all the hooks juju does when deploying an application. Also if I understand it correctly the reactive framework is just another hook right? I'm trying to create a diagram that shows the different steps juju performs.
<pmatulis_> TheAbsentOne, maybe https://jujucharms.com/docs/stable/authors-hook-environment
<TheAbsentOne> pmatulis_: yeah but that is not exactly what I am looking for, I browsed through that page already. Like is this diagram still relevant and correct: https://stackoverflow.com/questions/25159191/explain-about-juju-charms-hooks-file-execution-orders
<TheAbsentOne> The lifecycle of a charm would be the better term now that I think of it
<pmatulis_> unfortunately, i'm not an authority on the charm side of things
<TheAbsentOne> That's okay, maybe someone else knows something, thanks for trying to help me out though!
<zeestrat> lonroth_scania: yeah, that would be a bit annoying. I'd check with the guys in #maas and on the mailing list on what's best for that use case of upgrading hw.
#juju 2020-03-16
<wallyworld> and here's that forward port PR we discussed https://github.com/juju/juju/pull/11320
<thumper> moar coffee needed
<babbageclunk> I'm having trouble bootstrapping in lxd - get an error that it can't get mongodb-server-core. Anyone else seeing that?
<thumper> babbageclunk: which series?
<babbageclunk> bionic
 * thumper tries
<thumper> command is running, I'll let you know when it's done
<babbageclunk> I can install it fine on a fresh lxc instance
<tlm[m]> got it running also, no issues so far
<babbageclunk> I'm trying again
<babbageclunk> :(
<thumper> babbageclunk: new lxd controller started fine
<thumper> as did enable-ha on an existing controller
<tlm[m]> same problem babbageclunk , fails to install mongo
<tlm[m]> seems problem is with package libyaml-cpp0.5v5 not existing
<babbageclunk> tlm[m]: It looks like apt update is failing for me (with a name resolution error) and then everything else is a consequence of that.
<tlm[m]> no seeing any dns issues in my logs. Looks to be package missing for me
<tlm[m]> not*
<babbageclunk> tlm[m]: do you have squid-deb-proxy set up? maybe mine's gone rogue
<tlm[m]> Nup
<babbageclunk> yay worked it out
<thumper> oh?
<babbageclunk> I had an apt-http-proxy in my .local/share/juju/clouds.yaml entry for lxd, and I guess that proxy went away recently?
<babbageclunk> (wasn't to a local address)
<babbageclunk> I think it was the VPN squid, added at some point when there was a problem
<babbageclunk> and then not removed when the problem was fixed later.
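For the record, the offending setting lives in the per-cloud config in clouds.yaml — something along these lines (the proxy address is invented):

```yaml
# ~/.local/share/juju/clouds.yaml (sketch) -- a stale apt-http-proxy
# here makes `apt update` inside every new container fail with name
# resolution errors once the proxy goes away.
clouds:
  localhost:
    type: lxd
    config:
      apt-http-proxy: http://10.0.0.1:3128   # remove when the proxy is retired
```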
<thumper> :)
<babbageclunk> can I get a review for a backport? https://github.com/juju/juju/pull/11322
<wallyworld> babbageclunk: looking straight after meeting
<babbageclunk> thanks
<wallyworld> babbageclunk: lgtm ty
<babbageclunk> wallyworld: thanks!
<flxfoo> Hi all,
<flxfoo> I would like to upgrade my percona cluster. So after pausing ha and non-leader member, I would upgrade and reboot. Then resume each of those non leader. Should I switch leader to upgrade the last one ? How would I do that?
<stickupkid> manadart, you ever seen this
<stickupkid> https://paste.ubuntu.com/p/TXd3pyfTvh/
<manadart> 2:15:58 INFO  juju.environs.manual.sshprovisioner sshprovisioner.go:43 initialising "10.132.183.234", user ""
<manadart> I think the endpoint needs to be user@addr.
<manadart> Maybe.
<stickupkid> documentation fail
<stickupkid> manadart, fixed that, but still fails https://paste.ubuntu.com/p/rDpCVmQvcd/
<stickupkid> manadart, i'll have a look later
<manadart> stickupkid: Try explicitly adding the key that you know will work to the SSH agent...
<stickupkid> manadart, https://github.com/juju/juju/pull/11317/files
<flxfoo> hi again
<flxfoo> `juju add-machine` does not find the image, how can I change the image ID?
<stickupkid> manadart, I know this a draft and only targetting bootstrap, can you take a quick look to see if you agree before I move on to deploying path way
<stickupkid> manadart, might help if I add the PR https://github.com/juju/juju/pull/11324
<manadart> stickupkid: Commented. Looks OK.
<stickupkid> manadart, I'm struggling to workout the right thing for deploy as it serves two masters
<stickupkid> manadart, got a sec
<stickupkid> ?
<manadart> stickupkid: Daily?
<stickupkid> yeap
<hml> anyone know the difference in the branches for the go sys pkg:  release-branch.go1.14-std vs release-branch.go1.14-cmd
<hpidcock> hml: https://github.com/golang/go/commit/866920a07351ad5663cc712c31a0f7b5631bc85c
<hpidcock> std is what we want
<hpidcock> cmd is for the go tools
<hml> hpidcock:  given that we still have to compile with 1.11 and 1.12 for the near future, moving pieces up to 1.14 seems problematic on all architectures
<hml> hpidcock:  ty for the clarification
<hpidcock> hml: well then we should use the 1.12 pkgs
<hpidcock> but 1.14 pkgs for the most part should work on 1.10
<hpidcock> hml: actually 1.12 is no longer supported, so I think it's received its final update
<hpidcock> ideally we use 1.13 pkgs if possible.
<hml> hpidcock:  we really need to get the darn releases off of the early versions, that's the hold up in my mind
<hml> we can build all we want with new stuff... but if the release or the snap is foobar... it doesn't matter
<hml> i know it's in the works.
<hml> s/release/ppa
<hml> we have a bunch of other golang.org/x/* dependendies pinned to old stuff.  not just sys
<hpidcock> problem is we need to bump x/crypto
<hml> hpidcock:  i've had trouble ensuring that just the sys pkg update will resolve the issue with s390x. i'm getting i/o timeouts on make rebuild-dependencies
<hpidcock> :/
<hml> hpidcock:  it's a no win right now, with crypto and the bug fix
<hml> hpidcock:  review pls https://github.com/juju/juju/pull/11256
<hpidcock> hml: done
<hml> hpidcock: ty
<timClicks> babbageclunk: have a minute to chat vsphere?
<babbageclunk> timClicks: sure, in standup?
<timClicks> um yip 1 min
#juju 2020-03-17
<thumper> wallyworld: looks like the latest update didn't help the s390x build, still failing
<wallyworld> thumper: hmmmm, snaps built on lp
<wallyworld> ok
<thumper> https://launchpadlibrarian.net/469345132/buildlog_snap_ubuntu_xenial_s390x_juju-edge_BUILDING.txt.gz
<thumper> vendor/golang.org/x/crypto/chacha20/chacha_s390x.go:11:15: undefined: cpu.S390X
<thumper> I wonder if this is an additional flag that is set by a later version of the go compiler
<wallyworld> https://code.launchpad.net/~juju-qa-bot/+snap/2.7-edge
<wallyworld> https://code.launchpad.net/~juju-qa-bot/+snap/2.7-edge/+build/870841
<thumper> email six minutes ago
<wallyworld> build log shows correct/latest sha
<tlm[m]> wallyworld: do you have a link to the bug for pod DNS from last week ?
<thumper> hang on
<thumper> perhaps I'm looking at the latest develop build
<thumper> which won't have the fix yet
<thumper> it is juju-edge snap package
<thumper> not 2.7 edge
<thumper> wallyworld: I think we may be good then
<thumper> thanks for pointing this out
<wallyworld> all good, thanks for checking
<wallyworld> good to be sure
<thumper> wallyworld: do you agree that the failure above is because we don't yet have the 2.7 update in develop?
<wallyworld> thumper: let me finish standup
 * thumper nods
<wallyworld> babbageclunk: you happy to +1 my 2.7 merge pr?
<babbageclunk> wallyworld: oh, yup - just scrolling through it now
<wallyworld> ty
<babbageclunk> wallyworld: approved
<wallyworld> ty
<thumper> babbageclunk: hmm...
<thumper> machine-1: 15:49:17 WARNING juju.worker.globalclockupdater.raft timed out updating clock, retrying in 1s
<thumper> given that my machine isn't doing much...
<thumper> not sure why we'd get this
<babbageclunk> thumper: uhoh
<babbageclunk> let's look!
<babbageclunk> are you seeing it a lot?
<babbageclunk> or is it just the one?
<thumper> just once
<babbageclunk> was there a network blip that meant raft leadership changed?
<babbageclunk> it's not surprising in that case.
<thumper> hmm...
<babbageclunk> (or maybe just a machine being slow to respond to a heartbeat because compiling/running tests)
<thumper> it was deploying ubuntu-lite
 * thumper just thought of a really useful command addition
<babbageclunk> I mean, that doesn't sound like it would make the machine work too hard - it says lite right in the name
<thumper> if juju exec specifies --out, and there are multiple targets, we should write a file per target
<timClicks> +1
<thumper> so we could say "juju exec -m controller --all --out engine-report juju_engine_report
<timClicks> easy tee
<thumper> and end up with juju_engine_report.0 .1 and .2
<babbageclunk> is exec the new run?
<thumper> yeah
<timClicks> yes
<thumper> or...
<timClicks> that behaviour is similar to how wget works with --no-clobber
<babbageclunk> maybe treat the filename as a template
<thumper> --out engine-report-{}.txt
<thumper> and replace the {} with the machine id
<babbageclunk> jinx
<thumper> that'd be sweet
<thumper> perhaps add engine-report-0.txt.stderr if there is any stderr
<thumper> or something
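The filename templating thumper and babbageclunk sketch above could behave roughly like this (a hypothetical illustration only; `output_path` is an invented helper, not an existing juju option or API):

```python
def output_path(template, target_id, stderr=False):
    """Derive a per-target output file name for the proposed
    'juju exec --out' behaviour discussed above (hypothetical).

    If the template contains '{}', substitute the target id;
    otherwise append the id as a suffix (engine-report -> engine-report.0).
    stderr would go to a parallel '.stderr' file.
    """
    if "{}" in template:
        path = template.replace("{}", str(target_id))
    else:
        path = "%s.%s" % (template, target_id)
    return path + ".stderr" if stderr else path

print(output_path("engine-report", 0))          # engine-report.0
print(output_path("engine-report-{}.txt", 2))   # engine-report-2.txt
print(output_path("engine-report-{}.txt", 2, stderr=True))
```

So `juju exec -m controller --all --out engine-report juju_engine_report` would produce `engine-report.0`, `.1`, `.2`, while an explicit `--out engine-report-{}.txt` gives full control over placement of the id.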
<thumper> so... I wonder if I broke something in my status code...
<thumper> my machine is saying "pending"
<thumper> but it has a connection
<thumper> poo
<thumper> hmm...
<thumper> controller shows agents as "started"
<thumper> but my default model just showing "pending"
<kelvinliu> wallyworld: https://github.com/juju/juju/pull/11315 got this PR for CRD lifecycle, could u take a look when got some time? ty
<wallyworld> ok
<babbageclunk> thumper: did you work it out?
<thumper> no
<thumper> digging
<thumper> babbageclunk: meet?
<thumper> fark!!!
<thumper> babbageclunk: good news bad news kinda thing
<thumper> the machine agent really isn't running properly
<thumper> hence "pending"
<thumper> because the upgrade-steps-runner had issues
<thumper> unknown object type "Machiner" (not implemented)
<thumper> I don't even know how we ended up in this state...
<thumper> and how we haven't caught this issue earlier
 * thumper feels sad
<thumper> babbageclunk: ok found out my problem
<babbageclunk> oh nice
 * thumper is sending email
<thumper> not my fault
<stickupkid> manadart, got a sec?
<manadart> stickupkid: In Daily.
<stickupkid> manadart, thoughts about where to put the new series stuff, core/series seems like it has potential
<manadart> stickupkid: Yes.
<flxfoo> hi all,
<flxfoo> little question
<flxfoo> when upgrading the ubuntu os (apt) on hosts holding a percona cluster, I upgraded the non-leaders first (pause, upgrade, reboot, resume). Do I need to move the leader to another node before performing the upgrade on the current leader node?
<flxfoo> if yes, how do I move leader ownership to another node?
<achilleasa> manadart: it's goimports that complains about already merged stuff on develop; I will push a commit to fix the linter warnings
<manadart> flxfoo: The upgrade process handles freezing leadership so that it doesn't thrash while performing series upgrades.
<manadart> achilleasa: OK.
<flxfoo> manadart: thanks, can you give more details? I am not upgrading percona or going one OS release up...
<manadart> flxfoo: Ah, I see. You are not using `juju upgrade-series ...`. If take the agent offline for longer than the lease period (up to a minute IIRC) a new leader will be elected.
<manadart> *If you..
<flxfoo> manadart: ah ok, I needed to wait longer than, for it to elect new leader... no way to force that then?
<flxfoo> *then
<manadart> flxfoo: No, it's in the hands of Raft consensus. You should be able to see it change in `juju status` if the agent is offline.
<manadart> achilleasa: Looks like you need to fix some import groupings there.
<flxfoo> manadart: by agent, you mean `mysql-hacluster` unit?
<flxfoo> manadart: if yes, offline does not mean `paused` but `stopped`?
<achilleasa> manadart: guess goimports is not to be trusted...
<flxfoo> manadart: if I `pause` the hacluster, it just stays like that... no election seem to appear in debug-log
<manadart> flxfoo: What is the charm you're using percona-cluster?
<achilleasa> manadart: I think it all looks correct now; can you take another look in case I missed an import?
<manadart> achilleasa: I think it looks OK now.
<flxfoo> manadart: yeah
<flxfoo> manadart: +hacluster
<manadart> flxfoo: If you definitely want the leader to relinquish while you upgrade it, try stopping the primary mysql/percona unit agent rather than using the pause command. I'm not terribly familiar with that charm, but that should do it.
<flxfoo>  manadart ok, if I understood properly, I could actually upgrade the non leader ones first (with pause etc... and resume), then shutdown the leader node (stop mysql) in order for an election to take place?
<manadart> flxfoo: Yes, I think so.
<flxfoo> manadart: thanks for your time :)
<manadart> flxfoo: Sure thing.
<hml> stickupkid: where did you find ineffassign and unconvert for static analysis?  i saw a few private repos perhaps, but that's it
<stickupkid> hml, ineffassign - github.com/gordonklaus/ineffassign
<stickupkid> hml, unconvert - https://github.com/mdempsky/unconvert
<stickupkid> ha, nice that i kept my urls the same - hmm
<hml> stickupkid: okay, those are the ones i saw, just wasn't sure.
<achilleasa> hml: can you try rebasing 11318 on top of current develop? That should get rid of the un-related lint errors
<hml> achilleasa:  sure
<hml> achilleasa:  note, there are a few more changes to be made.  working on it now
<achilleasa> hml: left a small comment
<achilleasa> doing the QA steps now
<stickupkid> manadart, https://github.com/juju/juju/pull/11332
<hml> achilleasa:  pushed up changes to 11318.  need to remember to squash them before merging.  :-)
<achilleasa> hml: btw, when I tried the QA steps it failed to download (or was it upload) a resource
<rick_h_> stickupkid:  are you free to join back in daily?
<hml> achilleasa:  yeah... i saw that too.  i think it's because we don't know how to run the charm correctly.  that didn't seem to inhibit a few metrics from being collected.
<stickupkid> manadart, rick_h_ ping
<manadart> stickupkid: In Daily.
<rick_h_> stickupkid:  otp with interview
<achilleasa> hml: 11318 is approved
<hml> achilleasa:  ty
<hml> achilleasa:  do you have a few minutes?
<achilleasa> sure
<hml> achilleasa:  daily?
<achilleasa> omw
<stickupkid> manadart, hml, CR please https://github.com/juju/juju/pull/11332
<hml> stickupkid: looking at 11332
<sdhd-sascha> hi,
<sdhd-sascha> just trying to install `canonical's kubernetes` with juju. Inside of the worker-nodes i get:
<sdhd-sascha> `fs.go:540] stat failed on /dev/loop24 with error: no such file or directory`
<sdhd-sascha> And on the host i get this:
<sdhd-sascha> ```
<sdhd-sascha> $ losetup
<sdhd-sascha> /dev/loop24         0      0         1  1 /var/lib/snapd/snaps/lxd_13741.snap (deleted)           0     512
<sdhd-sascha> ```
<sdhd-sascha> What can i do, to dig deeper into what is happening here ?
#juju 2020-03-18
<TimM[m]> Hi all, would anyone be interested in collaborating to charm up Jitsi and BigBlueButton?
<thumper> wallyworld: ping
<wallyworld> yo
<thumper> I think you have a bug in the 2.8 upgrade step for the tasks sequence
<thumper> shouldn't the tasks sequence be bumped for every model?
<wallyworld> ok, will fix, what's the issue?
<wallyworld> probs, i'll need to check
<thumper> do we create the task id from the sequence value +1 ?
<thumper> is that why we're bumping the sequence id?
<wallyworld> the sequence starting value and increment logic changed
<wallyworld> so to avoid doubling up when using 2.8 on a 2.7 db, the sequence was incremented
<wallyworld> otherwise the latest sequence number could have been used twice
<thumper> that didn't answer my question
<thumper> it doesn't make sense to me why we actually have this upgrade step
<wallyworld> because we went from calling sequence() to sequenceWithMin()
<wallyworld> so starting id is 1 not 0
<wallyworld> and if you compare the old logic vs new logic, running 2.8 would have reused an existing task id
<wallyworld> i haven't got the exact code handy
 * thumper is looking at it now
<wallyworld> the code has moved as well
<wallyworld> rather, additonal use of task sequence added
<thumper> ok
<wallyworld> action id used to add 1 to sequence()
<thumper> I do think that the upgrade step needs to be run across all models
<wallyworld> which started from 0
<thumper> right now it only does the controller model
<wallyworld> yeah, it does
<wallyworld> which is a bug :-(
 * thumper is writing an upgrade step now for unit machine id denormalisation
<wallyworld> drive by? :-)
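The id-reuse hazard wallyworld describes can be sketched with a toy model (Python for illustration; the real code is Go in juju/state, and the names below are invented, not juju's actual API):

```python
class FakeSequences:
    """Toy stand-in for juju's mongo-backed sequence counters.
    Semantics here are assumptions, modelling only the off-by-one
    discussed above."""
    def __init__(self):
        self.counters = {}

    def sequence(self, name):
        # old style: return the current value, then increment (starts at 0)
        v = self.counters.get(name, 0)
        self.counters[name] = v + 1
        return v

    def sequence_with_min(self, name, minval):
        # new style: never hand out a value below minval
        v = max(self.counters.get(name, 0), minval)
        self.counters[name] = v + 1
        return v

s = FakeSequences()
# 2.7 behaviour: task id was sequence() + 1
last_27_task = s.sequence("tasks") + 1       # last id handed out: 1
# 2.8 on the same un-bumped db: sequenceWithMin(1) hands out 1 again
reused = s.sequence_with_min("tasks", 1)
print(last_27_task, reused)                  # -> 1 1 (collision)

# The upgrade step avoids this by bumping the counter once (per model):
s2 = FakeSequences()
last = s2.sequence("tasks") + 1
s2.counters["tasks"] += 1                    # the upgrade-step bump
print(s2.sequence_with_min("tasks", 1))      # -> 2, no reuse
```

This also shows why thumper's point matters: the bump has to run against every model's sequence document, not just the controller model's.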
<TimM[m]> 1st pass at a "juju personas" document https://discourse.jujucharms.com/t/juju-user-personas/2808
<jam> manadart: ping
<jam> or maybe stickupkid
<sdhd-sascha> Hi, i use juju with an lxd cluster. Now i removed one lxd-server from the cluster. Where can i find the configuration for the ip address ?
<sdhd-sascha> I searched and changed all occurrences at `~/.local/share/juju`
<sdhd-sascha> But i still get this error, when i try to remove or add unit's. Or when i try to call `juju storage`
<sdhd-sascha> `ERROR getting state: getting storage provider registry: Get https://...:8443/1.0: Unable to connect to: ...:8443`
<stickupkid> sdhd-sascha: is this the same one you signed up to?
<sdhd-sascha> stickupkid: it was the initial lxd-server from the cluster, where i later bootstrapped juju.
<stickupkid> sdhd-sascha, yeah, so I believe the issue is that we only know about that ip. To change the ip of that will require some mongo surgery. I think it's not unreasonable to make a bug for this.
<stickupkid> manadart, achilleasa_ that's correct ^
<stickupkid> ?
<sdhd-sascha> stickupkid: where can i find the mongodb?
<manadart> stickupkid: Yep.
<stickupkid> sdhd-sascha, https://discourse.jujucharms.com/t/login-into-mongodb/309
<sdhd-sascha> stickupkid: super. Thank you :-)
<stickupkid> sdhd-sascha, would you mind taking the time to file a bug? https://bugs.launchpad.net/juju/+bugs
<sdhd-sascha> stickupkid: yes, i will. Maybe i have time to create a patch, too
<stickupkid> sdhd-sascha, I'm unsure what's the best way to solve this, maybe the lxd provider/environ should help the instancepoller
<sdhd-sascha> Ah, ok. I will see
<sdhd-sascha> Thank you
<stickupkid> manadart, jam you seen this stack trace before ? https://paste.ubuntu.com/p/djJKY7dszN/
<jam> 2020-03-18 11:38:45 INFO juju.cmd supercommand.go:83 running jujud [2.7.5.1 9825e246a9ec70e6551744d033802f19d78cabce gc go1.14]
<jam> runtime: mlock of signal stack failed: 12
<jam> runtime: increase the mlock limit (ulimit -l) or
<jam> runtime: update your kernel to 5.3.15+, 5.4.2+, or 5.5+
<jam> fatal error: mlock failed
<jam> stickupkid: ^ I have not seen that before.
<jam> sounds like a bug in focal if I had to guess
<stickupkid> jam, neither have i
<manadart> stickupkid: Nope.
<stickupkid> jam, just wrapping up my focal PR, I'll see if it happens again
<stickupkid> manadart, CR my changes since thumper last reviewed - https://github.com/juju/juju/pull/11332
<stickupkid> manadart, also I need help testing if possible
<manadart> stickupkid: Yep. Gimme a few.
<hml> stickupkid: 1 line review pls?  https://github.com/CanonicalLtd/juju-qa-jenkins/pull/402
<stickupkid> manadart, whilst you're there - https://github.com/juju/juju/pull/11333/files#diff-e8543713fc0c30ea33131d41edf815e7R16
<manadart> rick_h_: Were we going to hang on and talk test?
<rick_h_> manadart:  yep my bad
 * rick_h_ got distracted
<hml> anyone noticed something wrong with changes to "juju model-config logging-config"?
<hml> i updated the logging-config value and it didn't take effect in my model, though shown in the model config.
<hml> had to bounce jujud for machine and unit agents
<hml> in develop
<danboid> Where is the juju user data stored eg credential/login stuff?
<danboid> or in other words, how do I move a juju account?
<rick_h_> danboid:  so it's in .local/share/juju
<rick_h_> danboid:  to move an accout the best thing is to just juju login to the controller from a new location
<rick_h_> danboid:  but if you need credntials/etc you need the .local/share/juju/credentials.yaml and clouds.yaml
<danboid> rick_h_, Thanks
<danboid> rick_h_, I presume a single juju user account can be used from multiple machines then?
<rick_h_> danboid:  yes, normally if you create a juju user and give them a password you can then juju login with that user/pass as long as you know the IP of the controller
<rick_h_> danboid:  so it's pretty easy to have multiple clients against a single controller
<danboid> Great, thanks
<stickupkid> rick_h_, my PR landed in 2.7
<stickupkid> rick_h_, and so has manadart's
<rick_h_> stickupkid:  woot woot, will watch for a ci run with that sha then ty!
<stickupkid> nps
<sdhd-sascha> hey, hi,
<sdhd-sascha> i tried to connect to the mongodb, but cannot find the correct password. Like, described here:
<sdhd-sascha> https://discourse.jujucharms.com/t/login-into-mongodb/309/5?u=sdhd
<sdhd-sascha> Hmm, i'm just inside a juju controller, but can't find the `init.d` or `systemd` startup of the `mongod` ? Who launches this daemon?
<rick_h_> sdhd-sascha:  those are started via systemd as long as you're on ubuntu >= xenial
<sdhd-sascha> rick_h_: thank you. I'm on 20.04 on the host. And the container is `bionic`
<rick_h_> sdhd-sascha:  this is juju bootstrapped on localhost?
<sdhd-sascha> yes
<rick_h_> sdhd-sascha:  so to access the contrller you do "juju switch controller; juju ssh 0"
<rick_h_> sdhd-sascha:  that puts you on the controller machine which has mongodb and jujud running
<sdhd-sascha> inside the controller, this command gives no output ... `# systemctl | grep mongo`
<sdhd-sascha> But, `pstree -p` shows that mongod is running...
<sdhd-sascha> Nope, if i try `juju ssh`, i have this:
<sdhd-sascha> ```
<sdhd-sascha> $ juju ssh 0
<sdhd-sascha> ERROR opening environment: Get https://10.0.0.8:8443/1.0: Unable to connect to: 10.0.0.8:8443
<rick_h_> sdhd-sascha:  what version of Juju?
<sdhd-sascha> rick_h_: that's because i deleted `10.0.0.8` from the cluster earlier...
<sdhd-sascha>  wait...
<sdhd-sascha> 3.22 (13840) on every server
<timClicks> I believe the service is called juju-db
<rick_h_> sdhd-sascha:  what is the Juju version though? what does the version in juju status show you?
<rick_h_> timClicks:  +1
<sdhd-sascha> sorry
<sdhd-sascha> ```
<sdhd-sascha> juju               2.7.4                       10906  latest/stable     canonicalâ  classic
<sdhd-sascha> lxd                3.22                        13840  latest/stable     canonicalâ  -
<sdhd-sascha> ```
<rick_h_> sdhd-sascha:  is your contrller up and running? can you do juju status successfully?
<sdhd-sascha> rick_h_: yes
<sdhd-sascha> ```
<sdhd-sascha> | juju-b1a552-0  | RUNNING | 10.0.2.92 (eth0)       |      | CONTAINER | 0         | mars     |
<sdhd-sascha> +----------------+---------+------------------------+------+-----------+-----------+----------+
<sdhd-sascha> | juju-b1a552-1  | RUNNING | 10.0.2.94 (eth0)       |      | CONTAINER | 0         | merkur   |
<sdhd-sascha> +----------------+---------+------------------------+------+-----------+-----------+----------+
<sdhd-sascha> | juju-b1a552-2  | RUNNING | 10.0.2.93 (eth0)       |      | CONTAINER | 0         | mars     |
<sdhd-sascha> ```
<rick_h_> sdhd-sascha:  is it something you can pastebin? https://paste.ubuntu.com/
<rick_h_> sdhd-sascha:  so that's lxc list which is showing you the containers you have running
<rick_h_> looks like you've got a 3 machine HA cluster going?
<sdhd-sascha> rick_h_: what exactly, should i print ?
<sdhd-sascha> rick_h_: yes, 3 HA
<rick_h_> sdhd-sascha:  `juju status`
<sdhd-sascha> https://www.irccloud.com/pastebin/qGvYj9Bz/
<rick_h_> sdhd-sascha:  hmm, ok can you try that ssh command again with --debug `juju ssh 0 --debug`?
<sdhd-sascha> https://www.irccloud.com/pastebin/ahwq5OaF/
<rick_h_> sdhd-sascha:  does that make any sense to you? It's confusing to me as the lxd machines are all 10.0.2.xx and not sure what a tomcat port has to do with anything?
<sdhd-sascha> rick_h_: today, your colleague said i should send a bug report about the failure with the deleted lxd-server.
<sdhd-sascha> I will do tomorrow.
<sdhd-sascha> But now, i only want to access the mongodb. But didn't have a password...
<rick_h_> sdhd-sascha:  ok, the services are run via /etc/systemd/system/juju-db.service and /etc/systemd/system/jujud-machine-0.service
<sdhd-sascha> rick_h_: I bootstrapped the lxd-cluster from 10.0.0.8. Then i bootstrapped juju from the same ip. Then i figured out, that the machine has not enough RAM. So i deleted it from the cluster... ;-) ...
<sdhd-sascha> rick_h_: Ah, thank you... i searched for `mongo`
<sdhd-sascha> :-)
<rick_h_> sdhd-sascha:  the db password will be on the machine in the /var/lib/juju/agents/machine-0/agent.conf
<rick_h_> sdhd-sascha:  that post is the right one: https://discourse.jujucharms.com/t/login-into-mongodb/309 but since your juju ssh is wonky you'll have to pull the script apart manually
<rick_h_> sdhd-sascha:  you can skip the juju ssh to the machine since it's lxd and just lxd exec bash on the right instance
<sdhd-sascha> rick_h_: my agent.conf's didn't have a value `statepassword` like i said before
<rick_h_> juju-b1a552-0 from the status output
<rick_h_> sdhd-sascha:  it should be there.
<sdhd-sascha> rick_h_: ah, thank you :-) i found it ;-)
<rick_h_> sdhd-sascha:  coolio
<sdhd-sascha> :-)
<sdhd-sascha> login works. great +1
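The recovery steps rick_h_ outlines can be sketched as follows (illustrative only: the sample agent.conf values are invented, and the exact mongo flags and port may vary by juju version, so check the linked discourse post for your release):

```python
# Sketch: pull the credentials a machine agent uses out of its
# agent.conf and print a mongo login command. The 'tag' and
# 'statepassword' field names match the discourse post referenced
# above, but treat the exact format as an assumption.
import re

SAMPLE_AGENT_CONF = """\
# /var/lib/juju/agents/machine-0/agent.conf (abridged, invented values)
tag: machine-0
statepassword: s3cretpassword
"""

def field(conf, name):
    """Extract a top-level 'name: value' field from agent.conf text."""
    m = re.search(r"^%s:\s*(\S+)" % re.escape(name), conf, re.M)
    return m.group(1) if m else None

tag = field(SAMPLE_AGENT_CONF, "tag")
password = field(SAMPLE_AGENT_CONF, "statepassword")
# juju-db listens on 37017 with TLS; the auth database is admin
print("mongo --ssl --sslAllowInvalidCertificates "
      "-u %s -p %s localhost:37017/juju --authenticationDatabase admin"
      % (tag, password))
```

As noted in the chat, on a localhost lxd cloud you can skip `juju ssh` entirely and `lxc exec <container> bash` into the controller instance to read `/var/lib/juju/agents/machine-0/agent.conf` directly.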
#juju 2020-03-19
<wallyworld> kelvinliu: here's a pr for update-k8s https://github.com/juju/juju/pull/11334 for whenevr, no rush
<kelvinliu> yep,
<kelvinliu> wallyworld: https://github.com/juju/juju/pull/11335 this PR is for CR lifecyle, +1 plz. im going to grab some food,  I will review urs once im back
<wallyworld> np
<manadart> stickupkid: I had a quick lash at merging 2.7 to develop last night, but your series patch has significant conflicts with edge deploy changes. Want to do that?
<stickupkid> sure
<stickupkid> manadart, https://imgflip.com/i/3t6rlg
<stickupkid> manadart, done https://github.com/juju/juju/pull/11336
<stickupkid> manadart, seems we have an error in the azure testing...
<stickupkid> manadart, someones added 1.13 code
<stickupkid> manadart, https://github.com/SimonRichardson/juju/commit/4e78d5f94d707998ce29ba5a8e32367499270493
<stickupkid> sorry it's 1.12 code
<Kris20> Hello, does anyone have a good email for juju support? Or is anyone aware of how a juju image could be provided to them?
<Kris20> I meant, an Ubuntu image with Juju installed*
<Kris20> Sorry if asking silly questions, I'm new in that.
<stickupkid> manadart, can you verify this https://github.com/juju/juju/pull/11336/commits/82394ec7e834b087f8fd0c17882a7fab0c038d1f
<manadart> stickupkid: Verified.
<stickupkid> thanks
<stickupkid> manadart, I'm going to verify everything greens first before attempting to merge it
<stickupkid> manadart, you seen this error
<stickupkid> https://paste.ubuntu.com/p/xXBF3wDbkC/
<stickupkid> i tested directly and it's downloadable
<stickupkid> seems weird
<manadart> stickupkid: Now and then. Just CI network conniptions.
<stickupkid> i've never seen conniptions used in the wild - ha
<stickupkid> also https://github.com/juju/juju/pull/11337 <- goose changes
<achilleasa_> manadart: turns out that extracting the common logic from the relation (lowercased one) won't work... if I do that then any relation hook tools called inside a relation-created hook will not work as they look up relation data in the wrong place
<achilleasa_> so instead, I will add the stuff I need to the relations struct and change the c-tor to return 2 resolvers (created, all-other) to force the use of the same Relations instance in both
<achilleasa_> updating the tests will be _loads of fun_ after this change...
<rick_h_> morning party folks
<hml> small review please: https://github.com/juju/juju/pull/11338
<sdhd-sascha> Hey, is juju-2.8-beta1 somehow usable ? I ask, because i want to setup a manual cloud. And my machines are `ubuntu 20.04 classic`.
<sdhd-sascha> Somedays ago, it didn't work. But would try it again...
<rick_h_> sdhd-sascha:  classic?
<rick_h_> sdhd-sascha:  so juju 2.7.4 has focal support in it. You have to set daily image streams and use the --force flag.
<rick_h_> sdhd-sascha:  we're getting ready to put 2.7.5 into candidate which will remove the --force bits
<sdhd-sascha> rick_h_: Hi :-)
<sdhd-sascha> I mean not `ubuntu core` ... The guys from snapd call this `classic`
<rick_h_> sdhd-sascha:  ok, just checking
<sdhd-sascha> `daily image streams` .. how ?
<rick_h_> sdhd-sascha:  sec, let me look up the discourse page with the step by step
<sdhd-sascha> no problem. keep calm ;-)
<sdhd-sascha> Yesterday the update of mongo inside the `cloud` collections worked fine :-) (But today i killed my cluster accidently :-D ... )
<rick_h_> sdhd-sascha:  check out https://discourse.jujucharms.com/t/juju-2-7-2-release-notes/2667
<sdhd-sascha> rick_h_: thank you :-)
<rick_h_> np
<wallyworld> kelvinliu: got time for a HO? in standup
<kelvinliu> wallyworld:  yep
<wallyworld> thumper: can you jump into standup?
#juju 2020-03-20
<kelvinliu> wallyworld: I just had a test, juju bootstrap to microk8s, we have rbac resource created. and https://pastebin.ubuntu.com/p/5m5Bj9Ytg2/
<wallyworld> kelvinliu: did you clear out your local clouds.yaml first?
<kelvinliu> but the rbac resources won't be deleted if the ctrl is killed/destroyed. there is a card for this https://trello.com/c/KrqgRMeH/951-remove-mcirok8s-credential-in-destroy-kill-controller
<wallyworld> and credentials.yaml
<kelvinliu> yes, I did
<wallyworld> and your cred.yaml is missing the rbac-id?
<kelvinliu> the rbac-id for microk8s is `microk8s`
<kelvinliu> it's not a uid
<kelvinliu> because we have only one buildin k8s cloud
<wallyworld> and yet when i run cat microk8s.config | juju add-k8s foo it does put in a rbac-id
<wallyworld> so it's not consistent
<wallyworld> and there's also now microk8s clusters
<kelvinliu> if it's add-k8s, then u can have many microk8s cloud but it's not a buildin cloud anymore
<wallyworld> so you could access externally like any other k8s cloud
<wallyworld> that is true
<kelvinliu> the `juju-credential-microk8s` rbac resources are not cleared after controller is destroyed because the credential is not deleted which is a todo as the card said
<thumper> wallyworld: sorry, was out having lunch
<thumper> quick k8s questions with units
<wallyworld> no worries, just wanted to touch base on a charm issue
<thumper> do we call hostDestroyOps?
<wallyworld> in standup now for reals
<thumper> sorry
<thumper> destroyHostOps
<wallyworld> i'll check after standup
<thumper> ack
<thumper> wallyworld: actually when you're done I should talk with you about k8s models and the all watcher
<thumper> I'm in the code now, and I'm worried
<thumper> actually less worried now
 * thumper keeps reading
<babbageclunk> phew
<tlm[m]> ?
<wallyworld> thumper: wanna jump in standup?
<thumper> yeah
<thumper> omw
<thumper> ugh... some of our tests are awful
<thumper> who would have thought that a simple denormalisation would trigger so many bad tests
<thumper> ok... I wouldn't have thought that
<thumper> but in many places, we were creating subordinates for units that weren't assigned to machines
<thumper> which is kinda bad
<thumper> some of our tests are incredibly contrived situations that can never happen in the normal flow of real deployments
<thumper> like opening a port on a unit that isn't assigned to a machine
<thumper> that'll never happen
<flxfoo> hi all
<flxfoo> is there any charm for a redis cluster running for bionic?
<stickupkid> flxfoo, https://jaas.ai/u/omnivector/redis-cluster/bundle/1
<flxfoo> thanks stickupkid
<manadart> stickupkid: Can you look at https://github.com/juju/juju/pull/11330 ? I still need to add some tests, but I expect that to be the only change from here.
<stickupkid> manadart, yeah, I'll get to it in a bit, fighting microstack atm
<stickupkid> manadart, did you ever get this bootstrapping to microstack?
<stickupkid> ERROR failed to bootstrap model: cannot start bootstrap instance in availability zone "nova": cannot run instance:  with fault "No valid host was found. "
<manadart> stickupkid: Nope.
<manadart> stickupkid: You import the image, add the stream and so on?
<stickupkid> i'll show you one sec
<hml> stickupkid:  that's a typical can't-create-the-host error msg from o7k
<stickupkid> manadart, https://github.com/juju/juju/pull/11253/files#diff-a86032d7aeab77815b81f23cdfb46921
<hml> stickupkid: you'd have to look at the nova logs to find the cause
<stickupkid> yay
<hml> stickupkid:  not enough memory or some random failure
<hml> stickupkid: it's the catch all error message for shit didn't work.... :-D
<stickupkid> \o/
<stickupkid> hml, seems legit actually https://paste.ubuntu.com/p/rGBWN5w4B7/
<hml> stickupkid: usually there's a better msg buried in the logs.  there are many nova logs, trying to remember which one specifically.
<hml> i don't see the cause of the "NoValidHost" in the pastebin
<hml> stickupkid: maybe this is the ptr to the issue "nova/scheduler/manager.py", line 156, in select_destinations"
<hml> they left the reason empty just to be helpful.  ;-D
<hml> stickupkid: i think i found my permission issue... the upgrade was handled by the machine agent... and the errors were in the unit agent....
<hml> the machine agent handled things okay...
<hml> now the question is how NOT to run it in the unit agent.
<stickupkid> damn it
<stickupkid> it's multipass :(
<stickupkid> i <- o/
<stickupkid> I'm giving it 12G of ram, how much more does it want! what does it think it is, mongo!
<rick_h_> lol, we have a joke that our dog can count "1..2...all of them". So when she sees bones she says she has "1..2...all of them"
<rick_h_> sounds like multi-pass wants "1...2..all of them" memory :)
<stickupkid> my tests pass locally though, so that's a bonus
<zeestrat> We usually say that it's like a garage. It's either new or full ;)
<hml> achilleasa_:  responded to a few comments.  have a read and let me know if we should chat?
<achilleasa_> hml: thanks for the comments.
<hml> achilleasa:  HO?
<achilleasa> sure
<stickupkid> manadart, CR done
<manadart> stickupkid: Thanks.
<stickupkid> hml, it's not memory that's an issue
<stickupkid> Memory usage:   5.4G out of 23.5G
<hml> stickupkid:  can you make an instane with a bigger flavor?
<hml> stickupkid: processor?
<stickupkid> given it 12 cores
<stickupkid> OT: I do like this cattle setup - I can totally just chuck everything out and start again
<stickupkid> hml, DISK SPACE
<stickupkid> not getting that time back - but it's almost working now
<hml> stickupkid: that was my next guess, some sort or resource issue usually
<stickupkid> that's probably the worst error message I've ever seen
<hml> stickupkid: ha!  I like "Error" too
<hml> stickupkid: had to track that down in juju.  or was it "Error not found". and that was all it printed
<hml> stickupkid: but agreed NoValidHost is a pita
<stickupkid> proof it works https://paste.ubuntu.com/p/Rxb8hxqKYk/
<stickupkid> juju needs to shut the f'up about image metadata though, that's just not right
<stickupkid> I'm so happy
<stickupkid> right, how do I shut juju - mission 2
<stickupkid> hml, up there with this error message https://content.spiceworksstatic.com/service.community/p/post_images/0000291424/5a6917f0/attached_image/task_failed_successfully.png
<hml> stickupkid: alol
<stickupkid> did anyone test the new python static analysis - i'm getting loads of syntax errors
<flxfoo> stickupkid: sorry to disturb, you mentionned using the redis-cluster charm... when I tried to use it (within cakephp), I gave the ip of the leader, but I have an exception saying that another member "MOVED"... Do I have to check a special configuration for the cluster?
<flxfoo> or cakephp
<stickupkid> flxfoo, you need to tell cakephp to follow the redirects, I'm unsure how you'd do that without digging
<achilleasa> rick_h_: I got the relation-created bits working with my refactoring changes. Still have to type the PR description and QA steps (quite a few scenarios) but if you want to try it out it's here: https://github.com/juju/juju/pull/11341 (you can try juju deploy ./testcharms/charm-repo/quantal/all-hooks -n 2)
<rick_h_> achilleasa:  awesome ty
<flxfoo> stickupkid: ok, so that will not be out of the box then... thanks
#juju 2020-03-22
<hpidcock> wallyworld: I'm starting to think https://bugs.launchpad.net/juju/+bug/1867783 and https://bugs.launchpad.net/juju/+bug/1867785 are related
<mup> Bug #1867783: Charm deployment hangs with 2.8 <k8s> <juju:In Progress by hpidcock> <https://launchpad.net/bugs/1867783>
<mup> Bug #1867785: Unauthorized error in debug-log with 2.8 <k8s> <juju:Triaged by hpidcock> <https://launchpad.net/bugs/1867785>
<hpidcock> going to work on the deployment hang first and see how I go
<wallyworld> hpidcock: i need to look at that today a bit more, chat soon, talking to babbageclunk atm
<hpidcock> wallyworld: we can chat after standup
<wallyworld> ok
