#juju 2012-04-09
<niemeyer> Hello jujuers
<smoser> hazmat, i'm here now. it looks to me from your pastebin at http://pastebin.ubuntu.com/918245/ that cloud-init ran the 'start', found the DS, and used it.
<smoser> but the cloud-config upstart job did not run
<smoser> which is:
<smoser>  start on (filesystem and started rsyslog)
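The start condition smoser quotes comes from the cloud-config upstart job under /etc/init/. A minimal sketch of what such a job looks like (the exec line is illustrative; the job actually shipped by cloud-init may differ):

```conf
# /etc/init/cloud-config.conf (sketch, not the exact shipped job)
description "apply cloud-config from user data"
start on (filesystem and started rsyslog)
task
console output
exec cloud-init-cfg all config
```

Note that if rsyslog never reaches "started", this job never runs, which is exactly the symptom described above.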
<SpamapS> hazmat: re your question about jitsu.. I like the idea of having a 'jitsu' command that is basically just the juju wrapper that already exists.. so it calls juju when jitsu doesn't have a command you pass it.
<SpamapS> maybe niemeyer should contribute to juju-jitsu, then it would be Brazilian Juju-Jitsu ;)
<niemeyer> :)
<imbrandon> jcastro: btw http://brandonholtsclaw.com/sites/default/files/domain-list.pdf I'm thinking cloudexamples.net will be a good place for "sure, just run juju deploy uber wordpress" etc
<imbrandon> whatduya thunk?
<gary_poster> hazmat or To Whom It May Concern: my escapades trying to get Juju working on Canonical OpenStack continue.  We are now worse off than when I started, but better off than last week.  Current status is https://pastebin.canonical.com/63978/ if anyone has any input or suggestions; the text is from the pertinent RT, https://rt.admin.canonical.com//Ticket/Display.html?id=50773
 * hazmat joins in the adventure
<imbrandon> hrm speaking of OpenStack, marcoceppi did you ever check to see about the s3 compat for hp swift containers ?
<marcoceppi> imbrandon: I haven't checked since I first tried
<marcoceppi> too much other stuff going on
<imbrandon> kk, np , just curious
<imbrandon> i may dig in and learn the ways of the jedi
<imbrandon> or try
<hazmat> gary_poster, what version of juju are you using
<hazmat> fwereade_, did you try juju with a private cloud with constraints?.. i'm getting errors on ec2-zone
<gary_poster> hazmat, 0.5+bzr504-1juju4~precise1
 * gary_poster gets lunch.  biab
<hazmat> gary_poster, thanks
<bkerensa> jcastro: You making juju stickers I hear?
<jcastro> no
<jcastro> those are charm school stickers
<bkerensa> jcastro: ahh
<bkerensa> jcastro: kees's sbuild juju charm is pretty cool we got to use it when he wrote it
<hazmat> gary_poster, since the private cloud is behind a firewall, you'll need to login to some machine where the client has access to the cluster.
<hazmat> gary_poster, attaching the public ip address no longer changes the output of public ip address or public dns address
<gary_poster> hazmat, that was not the case before, and preventing that restriction (the firewall) is the point of the RT.  The .canonistack dns names should be something we can configure once in .ssh/config and then juju would just work
<hazmat> gary_poster, yes
<hazmat> it is different now, previously i could just attach a public ip to the bootstrap and work, but that is broken.. if the addresses/hostnames returned were routable in some fashion that would also fix the problem
<gary_poster> hazmat, :-) cool.  So you suspect that if I get the RT fixed (the second bullet point in my summary) then possibly the deployment would also work again (third bullet point)?
<gary_poster> I was only pessimistic about this because the failure for my third bullet point was seemingly in canonistack itself
<gary_poster> so it was not a connection failure but...something else, like the security problem you found last time
<hazmat> gary_poster, hmm
<hazmat> gary_poster, yeah.. just reviewed the RT in full
<hazmat> i'd like to think persistence pays off..
<gary_poster> :-)
<hazmat> what i find a little irksome is that previously attaching public ips would change the reported public address
<hazmat> but that appears to no longer be the case
<hazmat> and is yet another minor incompatibility with ec2 behavior
<gary_poster> yeah
<hazmat> i guess i should submit my patch for euca2ools nova compatibility
<gary_poster> +1, if that is waiting in the wings, yeah :-)
<hazmat> looks like its already done.. but un-acted.. https://bugs.launchpad.net/euca2ools/+bug/891889
<_mup_> Bug #891889: euca-describe-addresses fails with missing attribute on Address object when using boto < 2.1 <euca2ools:New> < https://launchpad.net/bugs/891889 >
<hazmat> gary_poster, i don't really understand elmo's comment on that RT, what is juju supposed to do differently here when it doesn't have access to any accessible endpoint
<hazmat> ie. re fix juju
<hazmat> gary_poster, so my recommends in a nutshell would be setting up on the bootstrap instance after logging into it.. or getting the chinstrap gateway setup when they attach a domain suffix to the hostname.
<gary_poster> hazmat, re fix juju, I think my reply to him addressed his points.  There's not something for juju to fix, and there was a reasonable way to accommodate juju (*.canonistack resolving on chinstrap).  I will do as you suggest (set up on the bootstrap after logging into it) if I need to do canonistack, but my main goal is to push through the domain suffix thing, because I think that's going to be a win for every person who wants to use juju on canonistack.. and we're hoping that's a decent number of people.
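The "configure once in .ssh/config" idea could look roughly like this; the gateway hostname and options here are placeholders for illustration, not Canonical's actual setup:

```conf
# ~/.ssh/config (hypothetical; hostnames are placeholders)
Host *.canonistack
    User ubuntu
    # Tunnel through the gateway host so private-cloud names resolve/route
    ProxyCommand ssh chinstrap.example.com nc %h %p 2>/dev/null
```

With an entry like this, ssh (and tools riding on it) can reach the otherwise firewalled instances by name.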
<gary_poster> thanks
<hazmat> gary_poster, thanks for keeping at it.. i totally agree this process should be easier for folks..  its probably worth a wiki page on how to setup locally that we can direct folks to
<gary_poster> hazmat, yeah, that's my plan once it works.
<elmo> gary_poster: what are you talking about?
<_mup_> juju/trunk r518 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] fix for private clouds using ec2 api, ec2-zone and instance-type are still valid constraints
<gary_poster> elmo, RT 50773, your comment https://rt.admin.canonical.com/Ticket/Display.html?id=50773#txn-838466
<gary_poster> or actually https://rt.admin.canonical.com/Ticket/Display.html?id=50773#txn-838466
<gary_poster> eh same one, sorry, on call
<_mup_> Bug #977481 was filed: Support private/public cloud spanning <juju:New> < https://launchpad.net/bugs/977481 >
<_mup_> Bug #977528 was filed: Relation membership for child relations should be cached before hook execution <juju:New> < https://launchpad.net/bugs/977528 >
<_mup_> juju/relation-hook-commands-spec r9 committed by jim.baker@canonical.com
<_mup_> Updated per review comments
<marcoceppi> Can charms have a "." in the name?
<bkerensa> marcoceppi: surely jcastro knows
<bkerensa> :D
<jcastro> We haven't said one way or the other
<marcoceppi> I'll just omit the period then
<jcastro> m_3: hey so, we have a charm on github: https://bugs.launchpad.net/charms/+bug/977552
<_mup_> Bug #977552: Charm Needed: Terraria server <Juju Charms Collection:Confirmed> < https://launchpad.net/bugs/977552 >
<jcastro> ideas for integrating/mirroring/whatever?
<marcoceppi> jcastro: you can setup a mirror repo on LP
<marcoceppi> it pull from git every X mins
<marcoceppi> I think imbrandon did that for the drupal charm
<jcastro> I don't think asking people to mirror their stuff in lp will scale
<m_3> jcastro: np, with only a few, I can pull manually
<m_3> jcastro: still preliminary atm... just treat it as a normal charm submission... have him tag it with new-charm when it's ready for review
<m_3> jcastro: the fact that it's on github doesn't matter until after it's accepted into the charm store... then we'll figure out MPs -vs- PRs when we cross that bridge
#juju 2012-04-10
<imbrandon> m_3: its ready for the charm store and been reviewed :)
<imbrandon> i have it manually mirror both places atm, and then i setup a juju-drupal project so it would pull every few minutes, but it pulls to a different branch
<imbrandon> soo
<yolanda> hi, good afternoon
<yolanda> one question, how can i debug a db-relation-joined hook? it doesn't give any error, but the code in the hook doesn't seem to be executing
<sc-rm> Yolanda: Which charm are you trying to use?
<yolanda> sc-rm, i'm building a new one
<yolanda> i'm adding a relationship to postgresql
<yolanda> and i have a hook db-relation-joined
<sc-rm> yolanda: Which language do you code your charm im?
<yolanda> sc-rm, its' just bash
<sc-rm> yolanda: okay, then I would just add some echo in the places I suspect aren't getting hit
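A minimal sketch of the echo/juju-log tracing sc-rm suggests. The stub tools at the top exist only so the sketch runs outside a real unit (the hostname they return is made up); in an actual charm, the hook body alone would live in hooks/db-relation-joined:

```shell
#!/bin/sh
# Stub juju's hook tools so this sketch runs outside a unit.
TOOLS=$(mktemp -d)
printf '#!/bin/sh\necho "LOG: $*"\n' > "$TOOLS/juju-log"
printf '#!/bin/sh\necho "db.example.internal"\n' > "$TOOLS/relation-get"
chmod +x "$TOOLS/juju-log" "$TOOLS/relation-get"
PATH="$TOOLS:$PATH"

# --- hook body (what would live in hooks/db-relation-joined) ---
set -e
juju-log "db-relation-joined: starting"
DB_HOST=$(relation-get host)
juju-log "db host is $DB_HOST"
echo "would write $DB_HOST into the service config file here"
juju-log "db-relation-joined: done"
```

Trace lines like these show up in juju debug-log, which makes it obvious whether the hook fired at all.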
<yolanda> sc-rm, but how can i reach to that relationship in debug?
<sc-rm> yolanda: I guess you do something like juju add-relation postgresql your-charm?
<yolanda> sc-rm, yes, juju add-relation openerp postgresql:db
<yolanda> the relationship seems to be creating, doesn't give any error, but there is some code to change a config file and isn't triggered
<imbrandon> yolanda: anything in juju debug-log
<yolanda> juju add-relation openerp postgresql:db -e local
<yolanda> 2012-04-10 15:20:06,162 INFO Added pgsql relation to all service units.
<yolanda> 2012-04-10 15:20:06,163 INFO 'add_relation' command finished successfully
<yolanda> nothing in logs
<imbrandon> fun
<sc-rm> yolanda: have you looked at https://juju.ubuntu.com/docs/write-charm.html ?
<imbrandon> sc-rm: i'm fairly sure she has, this is yolanda's 3rd or 4th charm :)
<yolanda> i've read it
<yolanda> i have successfully created the install, start and stop hooks
<yolanda> but the db-relation-joined gives me problems
 * SpamapS stretches and yawns
<imbrandon> moins SpamapS
<SpamapS> yolanda: have you used 'juju debug-hooks' yet ?
<SpamapS> yolanda: I think its what you're looking for
<yolanda> SpamapS, yes, i used it, but how can i retry a relationship? i never get into the db-relation-joined hook
<yolanda> i remove the relationship, i add it again, but i'm not placed into the hook
<SpamapS> yolanda: if the state is just 'up', you need to remove/add it again. If it is in error, you can use 'juju resolved --retry unit/X db'
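SpamapS's two recovery paths as concrete commands, using the service names from this session; the commands are stored and echoed rather than executed, since they need a live environment:

```shell
#!/bin/sh
# If the relation is up but the hook never ran: remove and re-add it.
echo "juju remove-relation openerp postgresql:db"
echo "juju add-relation openerp postgresql:db"

# If the relation hook is in an error state: retry it in place.
RETRY_CMD="juju resolved --retry openerp/0 db"
echo "$RETRY_CMD"
```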
<SpamapS> yolanda: are you sure you're not still *in* the hook?
<yolanda> the status of the relationship is that:
<yolanda>     relations:
<yolanda>       db:
<yolanda>       - openerp
<yolanda> and that for openerp:
<yolanda>     relations:
<yolanda>       db:
<yolanda>       - postgresql
<yolanda> is that ok?
<SpamapS> yolanda: can you install 'pastebinit' and just pipe the whole output of 'juju status' to it
<SpamapS> ?
<SpamapS> bcsaller1: good morning?
<yolanda>  ok
<yolanda> http://paste.ubuntu.com/923267/
<SpamapS> yolanda: thanks reading
<yolanda> thx
<SpamapS> yolanda: so yeah I would expect remove/add to throw you back into db-relation-*
<SpamapS> yolanda: perhaps you need to exit debug-hooks and go back in ?
<yolanda> SpamapS, so i just do a debug-hooks openerp/0
<yolanda> and then remove add relation?
<yolanda> it doesn't place me in the hook, i just remain in the initial screen
<SpamapS> yolanda: it should definitely place you almost immediately in the joined hook
<yolanda> doesn't do it :(
<SpamapS> yolanda: can you run debug-log in another window and confirm that joined is being executed?
<yolanda> nothing in juju debug-log
<yolanda> SpamapS, so something is wrong with my juju charm? seems that nothing is triggered
<SpamapS> yolanda: can you pastebin metadata.yaml ?
<yolanda> sure
<yolanda> http://paste.ubuntu.com/923287/
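That pastebin has since expired; for reference, a hypothetical metadata.yaml for a charm consuming postgresql's db relation (the name and descriptions here are illustrative, not yolanda's actual file) is shaped like:

```yaml
# Hypothetical metadata.yaml sketch
name: openerp
summary: OpenERP application server
description: |
  Deploys OpenERP and connects it to PostgreSQL over the db relation.
requires:
  db:
    interface: pgsql
```

The interface name on both ends must match (postgresql's db relation uses the pgsql interface, as the "Added pgsql relation" log line above shows) or the hooks never fire.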
<SpamapS> yolanda: I suspect there is something more sinister going on. Can you try 'juju resolved --retry openerp/0 db'
<SpamapS> perhaps the new status output is not as obvious about errors
<yolanda> The relation 'db' has no unit state for 'openerp/0'
<yolanda> 2012-04-10 15:42:32,817 ERROR The relation 'db' has no unit state for 'openerp/0'
<SpamapS> yolanda: is this right after you added it?
<SpamapS> yolanda: can you run 'dpkg -l juju' on your machine, and on openerp/0 ? are they the same version?
<yolanda> SpamapS, yes, just after adding it
<yolanda> let me check the versions
<yolanda> mm, not same version
<yolanda> local is bzr504
<yolanda> openerp/0 is bzr518
<yolanda> but i'm using local containers in juju, why are they different?
<yolanda> i'm using ppa as juju-origin, perhaps i need to use repo?
<SpamapS> yolanda: well technically there should be no incompatibility between those versions
<SpamapS> but in reality, I've seen a lot of subtle breakage with stuff like this
<SpamapS> yolanda: just upgrade your local machine. 518 just landed yesterday.
<SpamapS> yolanda: if that fixes it.. please let me know. We really have to stop breaking things in backward incompatible ways.
<yolanda> let me see
<yolanda> do i have to regenerate all the environment?
<SpamapS> yolanda: hopefully no
<SpamapS> yolanda: you might want to destroy the existing services
<SpamapS> yolanda: but I'd try just with a new client binary
<yolanda> SpamapS, it gave some error, i'm destroying environment and creating new
<SpamapS> yolanda: could you pastebin the error?
<SpamapS> Thats actually quite critical since we plan to land one more version in precise .. I think
<yolanda> too late...
<yolanda> i just destroyed it
<yolanda> i'm running it again, let me check if the error shows now
<marcoceppi> I wonder would a "good" feature be to bootstrap a remote environment locally? So in the environments.yaml EC2 stanza say "local-bootstrap: true". Juju would spin up an LXC container instead of doing so in the intended cloud
<marcoceppi> I know you lose portability, but you could save money that way
<SpamapS> marcoceppi: you can save money with 'placement: local'
<SpamapS> marcoceppi: just remember to remove it after the first deploy. ;)
<marcoceppi> SpamapS: that deploys to the bootstrap, correct?
<SpamapS> marcoceppi: though I have thought a lot about how to do de-centralized juju.. and one way I was wondering about was to run the zookeeper locally and have juju agents just connect to 127.0.0.1:xxxx and have juju clients ssh in and "wake up" the agents with the port forwarded. Feels very hacky.
<SpamapS> marcoceppi: correct
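A sketch of where "placement: local" sits in environments.yaml; every other value here is a placeholder:

```yaml
# Hypothetical environments.yaml stanza (values are placeholders)
environments:
  ec2-cheap:
    type: ec2
    control-bucket: some-unique-bucket
    admin-secret: some-secret
    placement: local   # co-locate new units on the bootstrap node
```

As SpamapS notes, the key has to be removed after the first deploy, or every subsequent unit also lands on the bootstrap node.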
<_mup_> juju/ns-from-env r479 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<marcoceppi> SpamapS: Right, I've been pondering different de-centralized ways to manage juju this morning, at least at a high level, and I couldn't really fathom another way than what's running now
<SpamapS> marcoceppi: I think ultimately zookeeper should be ditched for a map of the nodes kept somewhere (S3 for instance).. and agents should just be responsible for their own data, and use 0mq to talk to eachother and feed their status back to object storage per-unit. I'm not convinced anymore that zookeeper is needed, though it *does* make things simpler.
<SpamapS> marcoceppi: It makes me nervous how much vital information is stored in ZK.
<SpamapS> anyway, time to wake up the family
<marcoceppi> SpamapS: so, an alternative would be to just maintain a map in the bucket, then all the relation-based data that may or may not be sensitive is on each node within the juju agent?
<SpamapS> marcoceppi: right
<marcoceppi> interesting
<SpamapS> basically juju client would just muck with a tree of intent (much like it does now) but instead of having ZK there with all of the other resulting data (like relation-set produced data), the agents would just keep that to themselves and message it back and forth.
<SpamapS> that way there's no one server that you compromise to pwn the entire environment.
 * SpamapS goes upstairs to start shaking his toddler awake
<marcoceppi> Is there any way to tell which unit is the "head" in a peer relation?
<hazmat> smoser, thanks for the clue yesterday re cloud-init, it appears that rsyslog fails to start in the lxc cloud-image
 * hazmat wonders what the heck /usr/bin/whoopsie is
<SpamapS> hazmat: just don't step on it
<jcastro> hey SpamapS
<jcastro> let's talk store for precise now that contest is nearly done
<SpamapS> yeah I started working on testing all the charms last week
<jcastro> SpamapS: anything sticking out as a problem? ie. do you think this will be a big deal or just flipping the switch?
<SpamapS> jcastro: no, initially most work fine. :)
<SpamapS> jcastro: we should have a period where we invite ~charmers to fix them all, and I need to ask the launchpad guys what will happen when there is already a branch present.. but i think we'll have this wrapped up by next week.
<jcastro> ok
<smoser> hazmat, why no syslog ?
<smoser> do you know ?
<smoser> as i might have been about to ask SpamapS on a similar question
<smoser> oh.
<smoser> lxc
<hazmat> smoser, haven't really debugged it yet.. it fails to start
<smoser> /dev/log
<hazmat> smoser, yup
<hazmat> probably
<smoser> lxc container does not probably have access to that device node
<smoser> does that make sense to you? would that be able to kick you in the right direction to fix it ?
<smoser> or you want more direction
<smoser> s/fix/test change/
<hazmat> smoser, doesn't exist in the container
<smoser> right. because it can't get created
<smoser> because lxc wont let it.
<smoser> er... app armour or what not.
<avoine> hazmat have you found a bug about rsyslog on lxc?
<avoine> I saw that because it prevents cloud-setup from being fired
<hazmat> avoine, well it doesn't start in particular on the cloud-image which causes issues for cloud-init's config which juju relies on.
<hazmat> smoser, at this point i'm just going to punt till after precise, it was a big change to even try at this point, even though the code delta is small
<hazmat> smoser, as far as hints.. though i'm all ears
<smoser> oh. wait... maybe we can fix this via cloud-init data.
<smoser> wait.
<smoser> no
<smoser> rsyslog is going to use /dev/log
<smoser> we need access to that in the container
<hazmat> smoser, looks like just killing the app armor profile for rsyslog gets it to start
<hazmat> and then cloud-init config is happy
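One way to "kill" an AppArmor profile, sketched with echo since it needs root on the container host; the profile path is the usual Ubuntu location for rsyslog, but verify it before running anything:

```shell
#!/bin/sh
# Disable and unload the rsyslog AppArmor profile (commands are
# echoed, not executed; run them by hand as root after checking
# that the path matches your system).
PROFILE=/etc/apparmor.d/usr.sbin.rsyslogd
echo "ln -s $PROFILE /etc/apparmor.d/disable/"
echo "apparmor_parser -R $PROFILE"
```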
<smoser> hm..
<smoser> i would have thought you needed access specifically to mknod from one of the lxc configs.
<smoser> but... ok.
<smoser> my suspicion is that /dev/log is not containerized
<smoser> hallyn or stgraber can probably provide insight on what to do here.
<hazmat> smoser, not sure, i don't see the messages from the container in the host..
<smoser> oh. wait.
<smoser> /dev/log is just a socket.
<smoser> and rsyslog sets it up
<smoser> so, yeah, containerized.
<smoser> i was thinking it was a dev node.
<hazmat> smoser, so its just the app aromor pollicy
<hazmat> oh.. double typo, extra points
<bac> when i try to use 'juju bootstrap' with --constraints specifying more than one constraint i get errors:  http://paste.ubuntu.com/923438/
<bac> i'm following the example shown at https://juju.ubuntu.com/DeprecatedEnvironmentSettings
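The paste above isn't visible here, but a frequent pitfall with multiple constraints is quoting: they are passed as one space-separated string. A sketch, with illustrative values (echoed rather than executed, since it needs a real environment) — whether this matches bac's exact failure is an assumption:

```shell
#!/bin/sh
# Multiple constraints go in a single quoted, space-separated string.
CONSTRAINTS="instance-type=m1.small ec2-zone=b"
echo "juju bootstrap --constraints \"$CONSTRAINTS\""
```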
<avoine> hazmat: smoser: could you tell me if you fill a bug about the rsyslog issue? I would like to follow it.
<smoser> hazmat, ^ can you file one? open it against lxc.
<smoser> i'd do it, but you have more context.
<hazmat> smoser, avoine sure
<avoine> thank you
<hazmat> avoine, smoser its bug 978147
<_mup_> Bug #978147: rsyslogd fails to start in cloud template  <lxc (Ubuntu):Confirmed> < https://launchpad.net/bugs/978147 >
<avoine> ok thanks again
<imbrandon> SpamapS: man, dunno if you're a rock fan or not, but I just bought the new Halestorm album. if you are even a little, don't hesitate, get it now! dude most sophomore albums suck, or at least don't stack up to the first, this one blows the first out of the water
<SpamapS> imbrandon: I'm disconnected from all new music unfortunately. I still just rock out to Pantera's Vulgar Display of Power when I need something with kick
<imbrandon> yea definitely along the same lines, dude i broke my nose on that tour
<imbrandon> in the moshpit
<SpamapS> haha
<SpamapS> Its my hockey music
<imbrandon> :)
<SpamapS> ONETWOTHREEFOUR!!!
<imbrandon> hells yea there is some reason i knew i liked you hahahaha j/k
<SpamapS> Honestly lately I just listen to Billy Joel. I need to chill more than I need to get amped up.
<imbrandon> :)
<imbrandon> they got a little chill on the album, like "heres to us", more like an 80's power ballad
<imbrandon> lemme see if i can find it on amazon, i think they let you preview it
<imbrandon> http://www.amazon.com/Heres-To-Us-Explicit/dp/B0070R94SK
<imbrandon> yea there is a preview
<imbrandon> its the slower side of the album
<imbrandon> this is the one that made me buy the album instantly http://www.amazon.com/American-Boys/dp/B007ONQC4Y
<imbrandon> and of course i have the first album too and seen em in concert 2 times now at Rockfest KC, biggest one day music fest in america ( 55k people, one day, 12 hours, 2 stages, hell yea )
<marcoceppi> err, I think you guys broke compatibility between juju versions 0.5+bzr504-1juju4~oneiric1 and 0.5+bzr518-1juju5~oneiric1
<marcoceppi> In this case, should I just run a apt-get upgrade on the bootstrap? juju status is giving a weird error ATM and I think it's because the juju versions mismatch
<SpamapS> marcoceppi: certainly worth a try. Can you maybe document the errors so we can try to avoid such madness in the future?
 * SpamapS really really wants to implement version stability
<marcoceppi> SpamapS: I understand I'm in the ppa version, so I expect some breakage. What would be best for you guys?
<marcoceppi> as far as documenting errors and giving enough details
<_mup_> juju/must-specify-relation-name r516 committed by jim.baker@canonical.com
<_mup_> Client side check
<m_3> SpamapS: hey... so blacklisting charms from testing
<m_3> I blacklist the openstack ones because they're already tested elsewhere
<m_3> but take buildbot-slave for example...
<m_3> it requires config to even install
<marcoceppi> m_3: should charms like that include a default.yaml for config then?
<m_3> I can either blacklist such things or bump them back to the authors and require them to have enough default config to at least "start"
<m_3> I'm tempted to bump them back, but just wanted peeps opinions on that... so we're requiring all charms to at least start when we're reviewing
<m_3> marcoceppi: yeah, we haven't been too strict with it though
<m_3> marcoceppi: really it's enough default values in the config.yaml... not a default.yaml to pass
<m_3> that's more work for the testing framework
<marcoceppi> m_3: I think it's a good idea to have enough defaults in config.yaml then
<m_3> yup
<m_3> it's strange for some charms though... buildbot-slave is a perfect example
<m_3> it's a general tool... defaults don't make sense really, they'd be put there _just_ for our testing purposes
<m_3> I'm cool with that tho
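The "enough default config to start" idea, in config.yaml terms; the option names and values here are invented for illustration, not taken from the real buildbot-slave charm:

```yaml
# Illustrative config.yaml: every option carries a default so the
# charm can at least install and start unconfigured.
options:
  buildmaster-host:
    type: string
    default: localhost
    description: Host the slave should connect to.
  buildmaster-port:
    type: int
    default: 9989
    description: Port the buildmaster listens on.
```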
<marcoceppi> SpamapS: http://paste.ubuntu.com/923559/
<marcoceppi> SpamapS: so upgrading the bootstrap didn't work, I guess I'm just going to have to destroy/re-bootstrap?
 * marcoceppi wonders how to best upgrade the omg env without downtime
<m_3> marcoceppi: dang, we've got to stop breaking the api between the cli and existing bootstrapped environments
<m_3> it's almost like you've gotta balance across two environments
<marcoceppi> m_3: I have a question, it might be easy to hold the package on my local machine, but doesn't each deployment/add unit install the current juju package?
<marcoceppi> So holding the package on my machine wouldn't really save me in the future from the units being out of sync with each other
<m_3> I've never gotten a variation of that to work... always ended up having to destroy/rebootstrap
<m_3> marcoceppi: are you using origin: ppa?
<m_3> or distro?
<SpamapS> marcoceppi: right, the only way to make it truly repeatable is to stick juju in a bzr branch
<m_3> well, even that's been changing
<SpamapS> marcoceppi: we *must* change that
<m_3> right... gotta freeze juju
<SpamapS> I made a proposal for that
<SpamapS> but its seen basically no response
<marcoceppi> m_3: ppa ATM
<SpamapS> other than +1's from hazmat here on IRC :)
<marcoceppi> with distro, will I be able to do placement: local and in subsequent upgrades of the distro what's to say it won't break the current deployment like the PPA did?
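Freezing juju per-environment, as discussed above, could look like this in environments.yaml; the branch URL is a placeholder, since no official frozen branch existed at the time:

```yaml
# Hypothetical stanza pinning the juju installed on new units to a
# known bzr branch instead of the moving ppa/distro packages.
environments:
  stable-env:
    type: ec2
    juju-origin: lp:~yourteam/juju/frozen-r504
```

juju-origin accepts ppa, distro, or a branch URL, so pinning to a branch keeps the client and every unit agent at the same revision.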
<SpamapS> marcoceppi: do you still have the affected environment?
<marcoceppi> SpamapS: yes
<marcoceppi> it's the omgubuntu env
<SpamapS> marcoceppi: can you do 'juju -v status' ?
<SpamapS> oh
<SpamapS> awesome!
<SpamapS> <sigh>
<imbrandon> marcoceppi: deploy on new ones and swap the ip is the only other option, but use the elastic ip
<imbrandon> for no downtime
<marcoceppi> SpamapS: it's okay, I can create a new env on the ohso AWS, bootstrap, deploy, add-relation, then move the elastic ip
<marcoceppi> imbrandon: yeah, that's the plan ATM
<SpamapS> I think its time for a stable PPA
<SpamapS> Been thinking about it for a long time
<marcoceppi> SpamapS: <3
<imbrandon> :)
<SpamapS> with a ridiculous number of tests before upload
<marcoceppi> SpamapS: http://paste.ubuntu.com/923597/
<SpamapS> and one of them will need to be "works with the last version"
<m_3> marcoceppi: might be worth really balancing (dns) between the two different environments... instead of just a manual failover
<SpamapS> yeah
<m_3> depends on the data
<imbrandon> SpamapS: can we add works on osx too ? /me ducks
<marcoceppi> m_3 ?
<SpamapS> looks like zookeeper was broken without bumping the version number
<SpamapS> imbrandon: don't duck, stand tall.. and donate a mini? ;)
<imbrandon> i actually probably could
<imbrandon> i got an extra
<m_3> whoohoo!
<imbrandon> its an older one, but it will run 10.7 easy nuff ( intel core 2 duo )
<imbrandon> marcoceppi: m_3 i think that would require dood to be using route 53 dns
<SpamapS> imbrandon: ideally we'd run a VM of Lion and Mountain Lion
<m_3> marcoceppi: two different juju envs... A and B... name balances across elastic ips in both envs.  Then you can make changes to one at a time and pull it out of rotation upon failure
<m_3> it's more $$ tho
<imbrandon> SpamapS: oh thats easy, got any actual vmware products running, osx will legitly load in those
<imbrandon> from 10.7 on
<marcoceppi> m_3: then it's probably a no :) We're looking in to placement: local for mysql+bootstrap now to shave off another small instance
<m_3> right
<marcoceppi> plus, not sure how the db sync would work
<SpamapS> marcoceppi: replication :)
<imbrandon> yea or rds
<SpamapS> rds would be overkill for their use wouldn't it?
<marcoceppi> not sure how to configure that in the existing charm infrastructure since bootstraps can't talk to each other (yet)
<imbrandon> yup
<imbrandon> way overkill
<jcastro> SpamapS: dumb question. I am on precise, and using all "ootb" parts, like distro not ppa, etc. Since I can't deploy "juju deploy subway" since the precise store isn't open, should "juju deploy cs:oneiric/subway" work?
<m_3> marcoceppi: no, the envs wwould be totally isolated
 * imbrandon really needs to figure out how to bootstrap into hp cloud then i can test all kinda funky configs
<m_3> marcoceppi: oh wait, you're right... replication
<m_3> we'd have to make it a config parameter... we want the environments as isolated as possible to provide availability
<m_3> negronjl's done some stuff with this... it was mongo, not mysql replication, but something similar could be done for mysql
<imbrandon> SpamapS: but yea the mini should be able to handle that no problem, it will just be choked on ram, i think 4gb is the max for that generation
<imbrandon> SpamapS: but even today vmware has a special clause in their products with apple and can run osx, its how the big hosting companies get away with osx vps's
<imbrandon> i know fusion, esx, vsphere and workstation all work, not sure about plain old player
<negronjl> marcoceppi, m_3:  The db sync would have to be a completely different approach ...  they're diff. monsters
<marcoceppi> This is all hypothetically not going to happen for the omgubuntu install, since it's more $$, btw
<m_3> negronjl: 1-way, right... but the basic idea of adding the replica's ip addr as a config param to the master...
<m_3> marcoceppi: understood... sorry, just brainfarting out-loud
<negronjl> m_3: yeah ... that principle could work
<m_3> well sorta out-loud :)
<imbrandon> heh
<marcoceppi> Feel free to keep farting, I just didn't want anyone to get too serious about ZOMG LETS DO THIS
<imbrandon> wow Kubuntu was "picked-up" i think that might be bigger news than the post leads on
<SpamapS> jcastro: you could just set default-series: oneiric too
<jcastro> SpamapS: yeah I was just wondering if I should just stick to all oneiric
<jcastro> but I want this precise hotness for LXC, etc.
<imbrandon> will the charms roll over like packages do ?
<jcastro> or does that not matter, my host is precise right?
<jcastro> imbrandon: yeah clint just needs to finish off some testing and stuff on it
<m_3> we need to do the rollover sometime soon
<imbrandon> cool
<SpamapS> jcastro: we'll fix this by opening the precise charm store soon
<jcastro> SpamapS: yeah it's just hard to demo the contest winners while having to put a "by the way if you're running 12.04 this won't work."
<jcastro> so I announced the winners and then later on I will do the demo commands and stuff for each one
<SpamapS> jcastro: it will work, if they put 'default-series: oneiric'
<SpamapS> jcastro: or 'cs:oneiric/foo'
<SpamapS> jcastro: or you wait a week and we push hard to get it open
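SpamapS's first two options, as they would be typed; the command is stored and echoed rather than run, since it needs a bootstrapped environment:

```shell
#!/bin/sh
# Deploy an oneiric charm from a precise client by qualifying the series.
DEPLOY_CMD="juju deploy cs:oneiric/subway"
echo "$DEPLOY_CMD"
# Or put "default-series: oneiric" in the environment's stanza instead.
```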
<m_3> jcastro: charms are always developed against the current release, not dev
<imbrandon> hey while i got everyone in the same room, m_3 anything special i need to do besides manually mirror for now ? jcastro will you promgstrangualte drupal and where do i start down the path to get my 2 acks myself
<m_3> working against precise should be gravy at this point for charms.. not required
<jcastro> m_3: yeah I just wanted my cake
<SpamapS> m_3: perhaps we should automatically push charms that have not diverged into the dev release when they pass tests on the dev release
<jcastro> SpamapS: hey so not doing a default-series:oneiric worked anyway
<SpamapS> that would be simple logic
<marcoceppi> imbrandon: you still need a final review on the drupal charm before promuglation
<jcastro> SpamapS: but we probably don't want to tell people to do that
<imbrandon> doh, marcoceppi sorry thought that was done
<imbrandon> sorry
<m_3> SpamapS: but then the author's automatically maintaining two releases of the charm?
<SpamapS> m_3: not really. they only have to maintain when the tests start failing on the dev release.
<imbrandon> m_3: they do with packages, i dont see that as a problem
<marcoceppi> imbrandon: It looks like you fixed the mysql error thing, so I'll spin it up one more time (didn't get the bug reply for some reason) if it runs then it should be ready to go
<m_3> ok
<jcastro1> hello from the subway charm!
<imbrandon> marcoceppi: rockin
<SpamapS> jcastro: sweeeeet
<jcastro> man subway is so sexy, it's my new demo charm
<imbrandon> jcastro: how do you make subway persistent damn it, i had to go back to irssi
<m_3> imbrandon: drupal'll be mirrored automatically in github.com/charms once it's in the store... then pull requests can go off of that mirrored branch
<m_3> jcastro1 awesome!
<jcastro1> imbrandon: run it on another machine you don't turn off
<imbrandon> sure then i close browser and it d/c
<SpamapS> no
<SpamapS> node.js keeps running
<SpamapS> its still online
<jcastro> oh, it's designed for multi user, you need to create an account on it
<jcastro> and just relogin next time
<imbrandon> it was on me, i couldn't reconnect
<imbrandon> ohhh
<imbrandon> i never made acct
<SpamapS> yeah
<imbrandon> ok thats my missing part
<jcastro> imbrandon: that's nice because you can have multiple people on one deployment
<imbrandon> m_3: rockin, ok for now i've been manually pushing to both and have each in the others .ignore file
<jcastro> "here's my work IRC." done.
<imbrandon> jcastro: yea i'll likely redeploy it on the box i run irssi from, i love it, just couldn't figure that bit out
<imbrandon> i was thinking it was a one user deal
<jcastro> alice irc in the charm store is the one user one
<imbrandon> yea
<m_3> SpamapS: yeah, I guess that makes sense once we hit some milestone like beta2, then all charms get promoted and tested against the dev release
<m_3> so we're behind :)
<imbrandon> SpamapS: btw i've had "walk" on repeat now for 45 minutes, lol , time to change the tunes
<imbrandon> your fault
<imbrandon> marcoceppi: yea it should have just been a matter of changing it to the mysql not mysqli driver on install
<imbrandon> i got some more goodness in store but testing it on brandonholtsclaw.com now, as it breaks some of the default drupal functionality
<imbrandon> as is
<imbrandon> but takes it from 20 r/s to like 160 r/s ootb
<imbrandon> :)
<SpamapS> mysqli is *awful*
<SpamapS> do not use it
<SpamapS> mysqlnd ftw
<imbrandon> yea i dont and will when 5.4 hits
<imbrandon> 5.3 nd is awful too
<SpamapS> Tho I don't know if we have it enabled yet on 5.3 in precise :-P
<SpamapS> imbrandon: it still beats mysqli
<imbrandon> yea , just dont, its bad
<imbrandon> i've been using the 5.4 ppa
<imbrandon> apc is broken tho
<imbrandon> thats bad
<shazzner> hey everyone
<imbrandon> ello
<imbrandon> the debian 5.4 ppa that is
<imbrandon> something about it is breaking apc right now, not sure what though, i cant get past segfaults
<imbrandon> thus the charm still using 5.3
<shazzner> a few days ago I talked about promulgating my kusabax charm, but I'm not sure how to do it
<imbrandon> but the 30% increase in perf would be nice
<m_3> aargh!  if I upgrade my cli, I can't talk to deployed envs... if I don't, I can't deploy new ones
<imbrandon> i officially changed that command to promgustrangulate
<shazzner> imbrandon: haha
 * m_3 groan
<imbrandon> catch 22
<marcoceppi> m_3: it's funtastic!
<m_3> I'm gonna snapshot VMs to run the cli for each long-running env
<m_3> and start freezing branches
<imbrandon> i'm thinking we're gonna start to see that lots in here
<imbrandon> the growing pains of rolling releases
<m_3> actually, that sounds like a great role for wrap-juju in juju-jitsu... like a virtualenv
<imbrandon> oh :(
<m_3> read environments.yaml to get the branch for each environment and update path accordingly... hmmm
<imbrandon> SpamapS: mmmm juju deploy osx/10.8 :)
<jcastro> SpamapS: m_3: you guys feel like hanging?
<jcastro> I have some questions about the documentation for review in that bug jdstrand filed
<SpamapS> jcastro: in 10
<m_3> jcastro: sure
<m_3> lemme grab some more coffee, then I'll be on
<marcoceppi> Is there a way to get peer-relation information in the install hook?
<SpamapS> marcoceppi: with the new relation-ids thing, yes!
<marcoceppi> fantastic!
<hazmat> SpamapS, marcoceppi there's definite awareness of the foobar that is consistent versions... the bzr branch atm is the realistic option.. going forward, having bootstrap stick a code copy on provider storage,  and deploying from provider storage is probably the right solution, package solutions get messy both for consistent versions and for cross release
<marcoceppi> SpamapS:  can I bother you in the #juju-wordpress room for a few mins. Having a hard time figuring out the best way to proceed with something for omgwp
<marcoceppi> hazmat: that would be beneficial, if the bootstrap deployed the same juju version across all nodes. that way I can just hold the juju package on my desktop until a more "structured" release update schedule is created
<hazmat> imbrandon, that reminds there was a fun php article on hacker news today, in informed hater style, http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/
<marcoceppi> Can you have more than one charm deployed to the bootstrap?
<marcoceppi>  with placement: local
<SpamapS> marcoceppi: yes, but don't do it in parallel
<marcoceppi> how so?
<SpamapS> marcoceppi: if you deploy && deploy, apt-get will run in parallel and fail
<marcoceppi> So deploy, wait for started, then deploy again?
<marcoceppi> I'm trying to figure out if it'd be better to just deploy the memcache charm on the bootstrap with MySQL or if I should bake memcache in to the charm and use peer relations to manage the sessions between units
<marcoceppi> since there's a memcache charm, figured that would be easiest
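For reference, the pattern SpamapS describes above (deploy, wait for started, deploy again, so two charms on the same machine don't race apt-get) can be sketched like this. The `agent-state` field name and the status layout below are assumptions for illustration; real `juju status` output is YAML whose field names varied across releases:

```python
# Minimal check for "are all units started yet?" against juju status
# output, so a second `juju deploy` can wait for the first to settle.
# Field name `agent-state` is an assumption, not taken from this log.
import re

def all_units_started(status_text):
    """Return True when every agent-state line reports 'started'."""
    states = re.findall(r"agent-state:\s*(\S+)", status_text)
    return bool(states) and all(s == "started" for s in states)

sample = """\
services:
  mysql:
    units:
      mysql/0:
        agent-state: started
  memcached:
    units:
      memcached/0:
        agent-state: pending
"""
print(all_units_started(sample))  # -> False (memcached/0 still pending)
```

A deploy script would loop on this between the two `juju deploy` calls, sleeping until it returns True.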
<yolanda> hi
<imbrandon> hazmat: heh /me looks
<yolanda> i found a problem running postgresql charm in juju local container
<yolanda> seems that isn't starting, and when i debug i receive that error: http://paste.ubuntu.com/923751/
<hazmat> yolanda, the pg log should have the actual error
<hazmat> the locale stuff looks like a warning..
<hazmat> and the hba.conf errors look more interesting
 * hazmat tries it out
<yolanda> http://paste.ubuntu.com/923758/
<hazmat> yolanda, are you deploying that from the charm store or from a charm directory on disk?
<shazzner> mediagoblin looks pretty cool
<shazzner> let me see if I can't whip up a charm for it
<SpamapS> shazzner: I already made one
<SpamapS> shazzner: it uses pip installs heavily tho
<SpamapS> shazzner: got wrapped up in trying to package half of pypi to get it into a PPA
<shazzner> SpamapS: oh nice! :)
<yolanda> hazmat, i grabbed the charms with the charm get tool
<yolanda> it's on a directory on disk, local repository
<imbrandon> hazmat: haha yea i've seen that argument 1000 times, its fundamentally wrong, the parts i see that are good he sees as flaws, i actually think that its a strength of php that you can write bad code, infact one of his complaints sums it up perfectly on why php is GOOD , PHP is built to keep chugging along at all costs. When faced with either doing something nonsensical or aborting with an error, it will do something nonsensical. Anything is better t
<imbrandon>  /endquote :)
<m_3> yolanda: is that against oneiric or precise?
<m_3> (the postgresql problems)
<yolanda> m_3, precise
<m_3> yolanda: it hasn't been tested against precise yet... that's later this week hopefully :)
<m_3> we're about to test all charms against precise to prepare for bumping the release version
<yolanda> :(
<yolanda> i was trying to build a charm that uses postgres
<yolanda> so better wait until the test is done?
<jcastro> m_3: dang, that ilo card is $85
<m_3> jcastro: yup... I was kinda pissed when I realized it wasn't built-in
<m_3> still rockin setup though... and pretty cheap even with the ilo card
<avoine> yolanda: can you show us the line that is broken?
<m_3> yolanda: maybe I can test that one manually early... it shouldn't be too hard of a fix
<m_3> the postgresql charm is kinda naive in some of what it does... needs some love from a pg expert :)
<yolanda> i pasted the postgres log here: http://paste.ubuntu.com/923758/
<m_3> sounds like the host-based-access template is barfing
<yolanda> let me check the config line
<m_3> heck, could probably even be disabled depending on the app
<avoine> yeah it needs IP adresse I think
<avoine> or a valid domain name
<yolanda> that one: host all openerp 192.168.122.122 md5
<m_3> hmmm... that looks correct
<m_3> lemme check the charm
<m_3> oh wait... that's strange
<m_3> the 192.168.122.122 should say 192.168.122.122/0
<m_3> yolanda: you can tweak this behavior on line 13 of hooks/db-relation-changed
<yolanda> let me test
<m_3> yolanda: perhaps make the change manually on the instance first to test it, then if that works change the charm
<yolanda> ok
<m_3> might even need to be 192.168.122.122/32 or 192.168.122.0/24
<yolanda> line 13 shows just an "exit 0" for me
<yolanda> let me see if i have to update the charm
<shazzner> does anyone know of this server software that allows people to write/edit books collaboratively?
 * shazzner looking for charms to tackle
<shazzner> it was called book/word-something
<shazzner> I thought I had saved the link :/
<m_3> yolanda: I was looking at an older version on my laptop... try line 17
<shazzner> ah-ha Booktype that's it
<m_3> yolanda: try to manually edit pg_hba.conf and add the /32 to the ip addr... see if pg will restart
<m_3> if so, that can be added to the charm.  the mask was there in the natty version, but was _removed_ from the oneiric version of the postgresql charm
<yolanda> m_3, seems that postgresq starts when i modify the 2 final lines:
<yolanda> host openerp openerp 192.168.122.122/0 md5
<yolanda> host all openerp 192.168.122.122/0 md5
<m_3> shazzner: gitolite and pandoc works really well for that
<shazzner> m_3: ah cool
 * shazzner checking out pandoc
<m_3> yolanda: so the /0 needs to be added _back_ to the precise version of postgres (I'm confused)
<shazzner> this is booktype: http://www.sourcefabric.org/en/booktype/
<avoine> shazzner: https://github.com/esetera/Booki.git
<yolanda> m-3, yes, seems that it's the solution
<m_3> shazzner: http://johnmacfarlane.net/pandoc/epub.html
<avoine> oh booki is the old version of booktype
<yolanda> m_3, so something like ${remote_host}/0 must be there?
<m_3> yeah, let me copy you the line in the older version.. "host $(get_database_name) ${user} ${remote_host_ip}/0 md5"
<m_3> although /32 would make more sense honestly
<m_3> s/remote_host_ip/remote_host/
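The pg_hba.conf fix being worked out above (append an explicit CIDR mask to the client address) can be sketched as a tiny helper. The function name is hypothetical; the line format matches the template m_3 pastes, with /32 as the default since that means "exactly this client IP":

```python
def pg_hba_line(database, user, remote_host, mask=32):
    """Build a pg_hba.conf host entry; /32 restricts access to one IP."""
    return "host %s %s %s/%d md5" % (database, user, remote_host, mask)

print(pg_hba_line("openerp", "openerp", "192.168.122.122"))
# host openerp openerp 192.168.122.122/32 md5
```

Note /0 (which yolanda confirmed works) is actually "any address", so /32 is the safer mask if the charm only intends to admit the related unit.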
<m_3> shazzner: I use an old-school makefile with this http://paste.ubuntu.com/923826/
<yolanda> m_3, what's the last line? don't understand
<shazzner> m_3: pandoc is nice just saw there is an emacs mode too
<m_3> yolanda: substitute "remote_host_ip" with "remote_host" in the line above... (remote_host_ip is old)
<m_3> shazzner: and you can embed latex if you're into that sort of thing
<m_3> shazzner: it's a little more manual... and you collaborate using revision control just like source... but I like it in practice
<m_3> also easy to retarget to docbook or other such nonsense that publishers might want
<shazzner> m_3: yeah it's rad, I'll save that for later
<avoine> yolanda: are you using the last version of the openerp charm?
 * shazzner prefers local scripts to large server software
<shazzner> still though I think booktype would make a good charm
<m_3> absolutely
<shazzner> I'm reading that it requires postgresql, sounds like there are some bugs in the charm though?
<yolanda> m_3, there is not remote_host_ip line in the hook...
<yolanda> avoine, it's just a new openerp charm that i'm writing for 6.1
<yolanda> we are also creating a new package for it, it's a kind of collaboration with openerp
<yolanda> so the existing openerp charm isn't the same
<shazzner> good heavens, the booktype install instructions is a book unto itself
<m_3> yolanda: yes, sorry... I was pasting a line from an older version of the charm.  That version used a variable that's no longer there, so I was trying to say "replace that variable with the new variable" in the line I pasted
<yolanda> oh, sounded very strange to me :)
 * m_3 just tripping over my words :)
<yolanda> so i just add a /0 or a /32 at the end?
<m_3> I'd try /32 because that makes the most sense for a netmask... but /0 seems to work
<m_3> the /0 is safest b/c you've already tried that and it worked
<avoine> yolanda: I'm the author of the current openerp charms so don't be afraid to tell me if you want me to change anything to make it fits your new packages
<yolanda> avoine, the thing is that the charm is totally different, because openerp 6.1 has server and web integrated
<yolanda> and also it's relying on a new package that we are building
<yolanda> i wanted to push it as soon as i had it working
<m_3> and I wrote the first postgresql charm... and haven't been able to find someone who knows something about postgresql to own it :)
<m_3> happy to make whatever changes are necessary
<yolanda> i'll do that change and try
<avoine> m_3: I had a suggestion in this bug but I'm not sure if it could break something 902672
<avoine> hum, I thought the irc bot would catch it, the link: https://bugs.launchpad.net/charms/+source/postgresql/+bug/902672
<_mup_> Bug #902672: Postgresql pg_hba.conf needs IP addresses <postgresql (Juju Charms Collection):In Progress by mark-mims> < https://launchpad.net/bugs/902672 >
<m_3> avoine: right... sorry that's been back-burner
<m_3> avoine: I think the biggest problem with this is different providers behaving differently.  I think relation-get private-address is now returning an actual ip addr on ec2 and lxc
<yolanda> well, leaving now, it's quite late here...
<avoine> m_3: yes
<yolanda> thanks for all
<avoine> and the patch won't work on IPv6 only network
<m_3> avoine: I'll check to see if this was actually fixed in core or is still broken for other providers (openstack)
<bkerensa> jcastro: you should ask people to take photos with their juju shirts :)
<_mup_> juju/must-specify-relation-name r517 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<robbiew> jcastro: http://gigaom.com/cloud/twitter-open-sources-its-mysql-secret-sauce/
<jcastro> I saw, he told me that at posscon but didn't say which
<robbiew> https://github.com/twitter/mysql
<robbiew> SpamapS: ^
<jcastro> I hope you are putting that into a PPA right now Clint, chop chop!
<jcastro> :)
 * robbiew spends time reading the news while his maas deploys
<yagey> hello.  embarrassing first question/problem:  juju bootstrap Could not find AWS_ACCESS_KEY_ID 2012-04-10 20:28:41,619 ERROR Could not find AWS_ACCESS_KEY_ID
<SpamapS> twitter's mysql?
<SpamapS> nice
<yagey> I have set the values in my ~/.profile
<robbiew> I *think* I'll be ready for my first juju deploy into my maas...woot
<jcastro> SpamapS: they just OSSed all their internal tweaks to mysql
<jcastro> yagey: you need to put that in your .juju/environments.yaml file
<SpamapS> I do plan to gather some support tomorrow for a group of smart people getting together to have one packaging branch for all of the forks.
<jcastro> yagey: https://juju.ubuntu.com/docs/getting-started.html
<yagey> i've tried setting access-key and secret-key in environments.yaml too, same error
<jcastro> under "configuring your environment"
<jcastro> SpamapS: he told me they wanted them out there, they don't plan to maintain their own fork etc, they're just going to roll with whatever percona, maria, etc. and co do with them
<jcastro> yagey: hmm weird, that has always just worked for me
<yagey> i removed from .profile and added to env.y, different error sorry - ubuntu@ip-10-194-250-210:~$ juju bootstrap                                       2012-04-10 20:39:19,137 INFO Bootstrapping environment 'sample' (type: ec2)... SSH authorized/public key not found. 2012-04-10 20:39:19,291 ERROR SSH authorized/public key not found.
<SpamapS> "Update: Someone alerted me that Facebook does have a site for sharing its MySQL patches, although itâs hosted at Launchpad."
<SpamapS> although.. as in.. thats a reason not to acknowledge it?
<SpamapS> Does the author realize that *mysql* is hosted on launchpad?
<yagey> ah - I ran:  ssh-keygen -t rsa -b 2048    that fixed it
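For reference, the environments.yaml shape yagey needed is roughly the following (bucket name and secrets are placeholders, and exact keys varied across early juju releases; the instance-type line is the optional setting discussed just below):

```yaml
environments:
  sample:
    type: ec2
    access-key: YOUR-AWS-ACCESS-KEY-ID
    secret-key: YOUR-AWS-SECRET-ACCESS-KEY
    control-bucket: juju-some-unique-bucket-name
    admin-secret: some-shared-secret
    default-instance-type: t1.micro   # optional
```

The SSH error yagey hit is separate: bootstrap also expects a public key in ~/.ssh, which `ssh-keygen` provides.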
<yagey> I should be able to run juju on an ec2 instance?
<hazmat> yagey, yes
<yagey> i see, it started an 'small' ec2 instance, which is not in the free tier :(
<imbrandon> SpamapS: yea but the world dont see that ( LP vs Github ) kinda like someone bitching about PHP again, php still powers the internet just as Github hosts more opensource code than any other place on the planet, even though Django or Plone or Launchpad or what ever may have the technical upper hand
<imbrandon> sucks but its the way of the west
<imbrandon> ( and yes github now has more code than sourceforge )
<imbrandon> Launchpad was tied to Ubuntu only for too long, it will take a ton of momentum to upheave that in the collective mindset of the internets
<yagey> is there a way to have it use micro instances instead?
<imbrandon> yagey: you should be able to set the default in the env.y
<yagey> ok, rtfm https://juju.ubuntu.com/docs/constraints.html
<hazmat> yagey, yeah.. you can set default-instance-type: t1.micro in your aws environment section of environments.yaml.. or if your using a recent version (ppa) it can be specified via cli --constraints="instance-type=t1.micro"
<hazmat> yagey, be aware that the micro instances are severely penalized on cpu usage..
<yagey> in terms of performance?  but they are free :)
<imbrandon> 1 is free
<hazmat> we tried using them as the default for ec2 for a little while, but they were basically broken wrt to predictable performance.
<marcoceppi> You get 750 hours of free micros a month, which ends up being about 1 micro a month
<yagey> only 1? it doesn't mention that on the page
<yagey> ah that makes sense, but I won't run for a whole month yet
<marcoceppi> yagey: then you can probably get away with deploying a few things on micros and removing them when done, be aware that there are a lot of issues with bootstrapping and deploying on micros because of their CPU constraints
<marcoceppi> It's a hit-or-miss situation
<yagey> ok, thanks
<imbrandon> you can just try things out too on local lxc containers
<yagey> i saw that page, didn't seem simple enough to start with however
<imbrandon> if your just kicking the tires
<yagey> I don't see a page showing the environment.yaml possible options in the dev documentation section
<marcoceppi> yagey: http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
<imbrandon> ok so hazmat , marcoceppi mentioned something earlier that i'd like to dig into deeper ( esp since hp cloud comes out of beta here very soon , as in we have a date now ) soooo instead of using s3 container for the bootstrap store the stuff on the bootstrap node and how do i go down that rabbit hole not being familiar with the juju internals at all at this point but more than willing to get my hands dirty , even with putting code where my mouth is
 * marcoceppi has secretly been talking to HP. Turns out their storage doesn't have S3 compatibility turned on but has EC2 compatibility enabled for compute.
<imbrandon> yea i got the same thing from pete just now
<hazmat> yeah.. i saw that re storage is native, and compute has ec2 compat..
<yagey> LXC doesn't appear to work on a virtualbox ubuntu VM, as the network is already virtual
<imbrandon> the nic on the vm should be seen as normal by the os
<imbrandon> if its bridged
<hazmat> imbrandon, so the client uses the storage to identify where the bootstrap node is..
<hazmat> so running the storage on the bootstrap node becomes a bit more challenging given that
<hazmat> another option would be to play mix and match storage and compute with an ostack native storage
<imbrandon> right, so a) i write it to work with swift, or b) make the bootstrap node info avail some other way , least that how i understand it, not having looked at code
<hazmat> imbrandon, yup that's the nutshell.. or c) ask hp to turn on the swift s3 middleware
<imbrandon> hehe been there, i dont think that will happen iirc
<imbrandon> least not coming from me
<marcoceppi> I had them open a ticket on it for tracking
<marcoceppi> not sure if it'll happen
<hazmat> yeah.. probably not.. but you never know.. i'm not sure why they would do one but not the other.. i take it the s3 swift middleware perhaps isn't as battle tested at scale
<imbrandon> so just how intricate is the s3:// code, i mean can either of you point me to the proper digging point so i can see just how much of a pain it would be to accommodate swift
<imbrandon> hazmat: well i'm told that even the compute will go away is their hopes, i dont see that, amazon has won this round
<imbrandon> but you know
<hazmat> imbrandon, its pretty simple.. 3 methods (get/put/get_url) -> juju/providers/ec2/files.py
<imbrandon> sweet, ok that doesn't sound terrible
<marcoceppi> hazmat: so would you recommend making juju/providers/openstack/ for this?
<hazmat> the get_url needs to create a url that can be used sans credentials.. with s3 its done via their url signing mechanism.. swift also has this capability.. but its in a middleware piece
<marcoceppi> I feel it would be better to create a native openstack provider rather than just modifying the current
<hazmat> i don't know that its enabled on hp's storage
<hazmat> ie.. this one.. http://swift.openstack.org/misc.html#module-swift.common.middleware.tempurl
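The tempurl middleware hazmat links is what would let a swift-backed get_url() hand out a URL usable without credentials, analogous to S3's signed URLs. Per that scheme, the signature is an HMAC-SHA1 over "METHOD\nEXPIRES\nPATH" with the account's tempurl key; the key and path below are made-up illustration values:

```python
# Sketch of swift tempurl-style signing (assumes the middleware is
# enabled server-side; this only shows the client-side computation).
import hmac
from hashlib import sha1
from time import time

def temp_url(path, key, method="GET", ttl=3600):
    expires = int(time()) + ttl
    body = "%s\n%d\n%s" % (method, expires, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return "%s?temp_url_sig=%s&temp_url_expires=%d" % (path, sig, expires)

print(temp_url("/v1/AUTH_demo/juju/provider-state", "secret-tempurl-key"))
```

Whether HP's deployment had this middleware enabled is exactly the open question in the conversation.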
<imbrandon> i was thinking about just translating the ruby gem hpcloud cli parts to python
<hazmat> imbrandon, its just ostack native api.... ostack distributes native python clients
<hazmat> afaik re hp's impl of it
<imbrandon> oh then even better
<imbrandon> yea'
<imbrandon> if its just pure
<SpamapS> hazmat: couldn't we also just use groups to find the bootstrap machine?
<hazmat> imbrandon, the juju code base is twisted though.. its like python.. but sorta of  not :-(
<hazmat> SpamapS, that would work as well
<SpamapS> I mean, do we really need file storage for *that* ?
<hazmat> SpamapS, good point
<hazmat> SpamapS, we just need some discovery mechanism
<SpamapS> I'd love to see an 'hpcloud' provider which just hack-xtends the ec2 provider to implement the file storage.
<imbrandon> heh
<hazmat> SpamapS, at that point why not just a native ostack impl?
<SpamapS> which would then be useful for any cloud that doesn't have S3
<SpamapS> hazmat: because thats a lot more code!
<hazmat> SpamapS, ie. providers/openstack per marcoceppi's suggestion
<SpamapS> Well yes call it whatever you want
<SpamapS> call it "halfec2" or "franken-ec2"
<imbrandon> well os would work on rackspace too even though they have s3 middleware right
<hazmat> SpamapS,  not nesc. it could be incomplete if we allow for storage spec independently.. but yeah.. that kinda of gets ugly
<marcoceppi> so, just do providers/hpcloud until someone comes out with a full openstack native provider?
<SpamapS> I wouldn't call it openstack unless you do the full OSAPI implementation
<hazmat> didn't someone start one of those..
<SpamapS> hazmat: the openstack guys were interested
<imbrandon> i thought that swift was
<SpamapS> one of the nova core devs
<imbrandon> osapi
<marcoceppi> I wish I was better at python or knew go
<imbrandon> why go ?
<marcoceppi> that's what juju will be written in soon, iirc
<imbrandon> ahh, someone got google fever, why not just full nodejs then
<SpamapS> imbrandon: uh.... nodejs and go have like, nothing in common.
<imbrandon> go compiles to js
<SpamapS> dart
<SpamapS> thats dart
<SpamapS> go compiles into native statically linked binaries
<imbrandon> ... >.> /me might be mixed up
<SpamapS> with a big runtime
<SpamapS> well
<SpamapS> a tiny runtime actually
<SpamapS> anyway, yeah, go, not dart
<marcoceppi> php \o/
<SpamapS> imbrandon: lp:juju/go
<imbrandon> :)
 * imbrandon high fives marcoceppi 
<imbrandon> SpamapS: yea i'll have to look, no idea how i equated the two
<SpamapS> imbrandon: well apparently google likes to invent languages :)
<imbrandon> maybe dart is written in go ? /me is even more confused now
<SpamapS> and go was such an odd choice.. like, how about call it 'be' or 'have'
<imbrandon> lol
<imbrandon> marcoceppi: and languages are just syntax, once you learn one, picking up new ones is easy :) its what keeps us relevant
<SpamapS> yeah, I like go actually
<imbrandon> is it whitespace dependent ?
<SpamapS> makes me feel like I'm reading C, but I don't have to get all sideways on pointers
<SpamapS> no
<SpamapS> imbrandon: its very C-like
<SpamapS> imbrandon: but it has an automatic formatter thingy that keeps all go indented the same
<imbrandon> sweet i like c like
<marcoceppi> I went through the go demo thing, got to chapter 20 something
<imbrandon> yea as long as its not whitespace sensitive i can learn to love it, its my ONLY problem with python
<SpamapS> whats so bad about whitespace sensitive? It makes all python readable, even written by the most junior coders.
<imbrandon> auto formatting is good, i am anal about that in my own code
<imbrandon> even php
<imbrandon> but i dont want code to break because of it
<imbrandon> 2 spaces per indent, no tabs
<imbrandon> ever
<imbrandon> brackets are a must, even for one liners
<imbrandon> :)
<imbrandon> people hate when i nit pick their php
<imbrandon> but it makes it so i can maintain shit i wrote 5 years ago easy
<SpamapS> imbrandon: right, which is why people like python. There is no nit picking.
<SpamapS> it works, or it doesn't
<imbrandon> heh
<SpamapS> anyway, language debates are outlawed for the next 30 minutes so I can get something worthwhile done
<imbrandon> roger :)
<SpamapS> (like save all these daisies for these pirates)
<hazmat> http://www.pirateslovedaisies.com/ :-)
<imbrandon> zomg
<imbrandon> ok sooo back to the main question, do i start a hp cloud provider and just copy most of the code from ec2
<imbrandon> as boilerplate
<imbrandon> and fix the storage or what was that aobut groups
<SpamapS> imbrandon: I would import the ec2 provider's machineprovider object, and just write my own filestore
<SpamapS> imbrandon: please don't copy/paste! ;)
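SpamapS's suggestion (reuse the ec2 machine provider, supply only a new filestore) hinges on the three-method interface hazmat named earlier: get/put/get_url in juju/providers/ec2/files.py. A minimal stand-in for that interface, with an in-memory dict instead of real swift calls and an entirely hypothetical URL scheme:

```python
# Illustration only: the three-method storage interface a swift-backed
# provider would have to supply.  A real hpcloud provider would compose
# this with the imported ec2 MachineProvider rather than copy its code.
class InMemoryFileStorage:
    def __init__(self):
        self._blobs = {}

    def put(self, name, data):
        """Store a named blob (e.g. the bootstrap's provider-state)."""
        self._blobs[name] = data

    def get(self, name):
        return self._blobs[name]

    def get_url(self, name):
        # The real method must return a URL usable *without* credentials
        # (S3 signed URL / swift tempurl); placeholder host here.
        return "https://storage.example.com/juju/%s" % name

store = InMemoryFileStorage()
store.put("provider-state", b"zookeeper: 10.0.0.1")
print(store.get("provider-state"))
```

The get_url requirement is the hard part, as hazmat notes: clients and new nodes locate the bootstrap's zookeeper via this storage before they have any cloud credentials of their own.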
<imbrandon> or import
<imbrandon> :)
<SpamapS> imbrandon: I know you're used to PHP and namespaces are all new and sexy.. but they are built in to python :)
<imbrandon> hahaahh
<imbrandon> i know i know :) i dont even have to name them, just have the file names ;)
<imbrandon> just cuz i dont like python dont mean i cant hobble some together :)
<imbrandon> man i could get used to ubuntu app design again, css3 for skins now go instead of python
<imbrandon> its like your after my own heart
<imbrandon> pypm install launchpadlib
<imbrandon> gah
<robbiew> SpamapS: so...if I have juju installed from the ppa and do a bootstrap, my node should have the same version of juju...right?
<robbiew> testing maas...and my node is getting juju from the archive :/
<robbiew> even though I have the ppa setup on my local machine
<imbrandon> what about in the env.y
<robbiew> that's empty
<robbiew> wait..no
<robbiew> I actually added juju-origin: ppa
<imbrandon> ahh then smells like a bug
<robbiew> b/c w/o that line, it still did the same :/
<robbiew> http://www.sadtrombone.com
<hazmat> ugh.. pypm.. active state ugh..
<imbrandon> hazmat: easy way to get launchpadlib on osx
<hazmat> robbiew, so checked your apt-cache output, the code parses it correctly.. which is odd
<hazmat> imbrandon, pip/easy_install are builtin on osx as well
<SpamapS> robbiew: the PPA auto-detect thing may not be working right
<hazmat> robbiew, so even with the explicit juju-origin its installing from the archives?
<robbiew> yes
<robbiew> 0.5+bzr504-0ubuntu1 on the bootstrap node
<robbiew> 0.5+bzr518-1juju5~precise1 on my machine
<hazmat> robbiew, if you login into the instance can you pastebin /var/log/cloud-init-output.log  and /var/lib/cloud/instance/user-data.txt
<robbiew> sure thing
<hazmat>  i put some code in the latest trunk on r517 to log origin on bootstrap, but with the apt-cache policy juju you sent me earlier, the auto detect code did the right thing.. but even ignoring that.. getting juju-origin switched out is bogus..
<hazmat> if its explicitly specified
<hazmat> robbiew, hmm.. i wonder.. does maas reinstall when machines get recycled on destroy-environment/bootstrap.. or just stop/start the machine.
<robbiew> hazmat: good question...i don't believe it reinstalls
<robbiew> hazmat: http://pastebin.ubuntu.com/924048
<hazmat> robbiew, hmm.. 2012-04-10 17:46:34,398 - cc_apt_update_upgrade.py[WARNING]: Source Error: ppa:juju/pkgs:add-apt-repository failed
<robbiew> hmm
<robbiew> maybe network blip
<hazmat> so the ppa fails, and when it goes to install it later it gets distro.. that makes more sense
<SpamapS> doesn't reinstall?
<SpamapS> thats a violation of the basic expectations that juju offers charm authors IMO.
<SpamapS> Otherwise charms need to get even more careful about idempotency
<imbrandon> Why is Petyr Baelish called "LittleFinger" ?
<hazmat> idempotency doesn't nesc. help here, without the reinstall the disk just becomes an accumulated dumping ground, juju expects that releasing a machine will make it go poof, destroy-env doesn't attempt to clean up disks, so you'd have potentially old agents still running on the newly booted machine.
<robbiew> ok...I'm going to blow away my maas and start fresh...luckily I've done that a bazillion times now, so it's painless :P
<hazmat> imbrandon OT but..  http://gameofthrones.wikia.com/wiki/Petyr_Baelish
<imbrandon> hazmat: haha yea, i saw it on an ad banner, thought it was clever , winter is def coming
<SpamapS> imbrandon: because he's from the fingers and he's little
 * SpamapS has found himself getting more and more obsessed with GoT
<imbrandon> same here. i've actually read ahead in the books now, just waiting on the story to play out on tv
<SpamapS> yeah, I'm on book 5
<imbrandon> i'm not quite that far, i think starting 4 its been a few months
<imbrandon> i'm 1 past where last season ended
<imbrandon> and that was 3 i think
<SpamapS> yeah, from the previews, it looks like season 2 may be a bit more than book 2
<imbrandon> yea, i just watched the first eps of season 2 lasty night on dvr
<marcoceppi> Holy crap, that's quite a loud deprecation warning
<marcoceppi> I guess no more default-instance-type
<SpamapS> what
<SpamapS> oh its all wrapped up in constraints now?
<marcoceppi> http://paste.ubuntu.com/924092/
<SpamapS> I'm guessing constraints is what broke existing bootstrapped envs from using the PPA
<marcoceppi> yeah, I was just surprised how verbose that warning was
<marcoceppi> NO! It's not in autocomplete
<marcoceppi> unless, you can't bootstrap with constraints?
<SpamapS> we need to update the bash completion to call juju the way bzr does
<marcoceppi> can I run juju set-constraints before bootstrapping, or is that something that needs to run after you bootstrap?
<SpamapS> marcoceppi: I think it has a --constraints
<marcoceppi> ah, the wiki page has the info
<marcoceppi> sweet
<marcoceppi> I searched everywhere but the obvious place
<marcoceppi> SpamapS: so, since I have the latest ppa version, how do I do placement: local?
<jcastro> aha!
<marcoceppi> is that still in the environment.yaml
<jcastro> updated documentation!
<jcastro> so used to having it be out of date that now you have no idea what to do!
<imbrandon> SpamapS: dude, GO is typed PHP 5.4
<imbrandon> like for real
<SpamapS> http://cdn.memegenerator.net/instances/400x/18261941.jpg
<imbrandon> 404
<SpamapS> not here. :-P
<lifeless> 500
<imbrandon> ERROR
<imbrandon> The request could not be satisfied.
<lifeless> indeed
<imbrandon> btw moins lifeless :)
<imbrandon> mmm i could get used to this though, /me downloads the dev kit
<SpamapS> imbrandon: golang 1 is in precise
<imbrandon> i know i know, i need to install precise, i tried this morning actually, and failed, almost took out my osx partition
<imbrandon> was quite miffed , will try again in the morning
<SpamapS> I install using rEFIt and unetbootin .. never have problems that way
<robbiew> hazmat: sorted it....networking related...routing between my maas setup and the internet was broken
<yagey> when I deploy a mongodb instance, and another via add-unit - it should configure a replica set right?
<imbrandon> SpamapS: yea i couldn't get the precise image to boot from refit
<imbrandon> just kept recycling
<imbrandon> but i grabbed the one "made for mac" i'm grab the normal x64 one and try again
<SpamapS> yagey: yes it should
 * SpamapS bumps negronjl to write a README for mongodb
<SpamapS> negronjl: ^^
<yagey> it doesn't.  I'll try debug-log to see what up
<negronjl> SpamapS: I'll get to it in just a minute :)
<yagey> does juju need to run as root or sudo?
<SpamapS> yagey: the juju client should be fine running as your user
<SpamapS> yagey: the agents run as root, so the charms run as root
<yagey> 2012-04-10 16:14:14,908 Machine:0: twisted ERROR: Copy /var/cache/lxc/oneiric/rootfs-amd64 to /var/lib/lxc/yagey-sample-0-template/rootfs ...  2012-04-10 16:14:14,912 Machine:0: twisted ERROR: Copying rootfs to /var/lib/lxc/yagey-sample-0-template/rootfs ...Please change root-password !
<lifeless> SpamapS: so whats the memgenerator url (page, not jpg)
<SpamapS> http://memegenerator.net/instance/18261941
<SpamapS> so not as funny 10 minutes later
<lifeless> :)
<m_3> yagey: I tested mongo replicasets using three nodes at a time (juju deploy ... -n3 local:mongodb)
<m_3> yagey: didn't do a lot of starting one and then adding others one at a time... I'll bet that takes longer
<yagey> thanks, it appears to be a lxc issue with dependencies
<m_3> yagey: should still work eventually though
<m_3> ah, gotcha
<SpamapS> yagey: charm missing some deps?
<yagey> 2012-04-10 16:14:14,920 Machine:0: twisted ERROR: The following packages have unmet dependencies: 2012-04-10 16:14:14,920 Machine:0: twisted ERROR:  ubuntu-minimal : Depends: netcat-openbsd but it is not going to be installed 2012-04-10 16:14:14,920 Machine:0: twisted ERROR: E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). 2012-04-10 16:14:14,928 Machine:0: twisted ERROR: failed to execute template
<SpamapS> weird
 * m_3 suspects incomplete lxc images... let it finish caching the first image (?)
<negronjl> SpamapS: MongoDB README file done ... let me know if it's good enough
<SpamapS> negronjl: you type fast ;)
<yagey> m_3: any suggestions?  what kind of image?
<negronjl> SpamapS: I had it done already just forgot to push it ;)
<negronjl> SpamapS: err .. I mean .. yeah I type really fast :)
<yagey> spamaps - can I see that README?
<SpamapS> yagey: yes, just pull the latest version of the charm
<SpamapS> yagey: should be visible here soon: http://jujucharms.com/charms/oneiric/mongodb
<m_3> crap yagey left
<m_3> hope that got worked out
#juju 2012-04-11
<adam_g> using PPA, is 'instance-state' meant to reach the 'started' state, or stop at 'provisioned' ?
<hazmat> adam_g, started
<hazmat> robbiew, cool, good to know thanks
<adam_g> hazmat: hmm. at least the bootstrap node of this new environment only made it as far as 'provisioned' with orchestra, but seems otherwise fine
<hazmat> adam_g, hmm.
<hazmat> adam_g, looking at the orchestra provider, it looks like its either 'provisioned' or 'pending'
<hazmat> so no 'started' state..
<hazmat> probably worth a trivial for consistency
<hazmat> except its not really checking that the machine is started
<adam_g> hazmat: right, i knew it hit 'provisioned' as soon as it finishes its preseeded install. thought it went to 'started' in the past, though, after first boot
<hazmat> adam_g, apparently not.. does orchestra provide that info?
<hazmat> the code reads.. state = "pending" if d["netboot_enabled"] else "provisioned"
<hazmat> not sure what other info is available from cobbler wrt to the actual state
<hazmat> interesting yet another cobbler fork.. http://code.google.com/p/madhatter/
<hazmat> i don't see anything more appropriate for actual status looking over the internal data struct.. https://github.com/cobbler/cobbler/blob/master/cobbler/item_system.py
<SpamapS> cobbler only knows that it has been setup for pxe boot
<SpamapS> and can be asked what the power state is *if* it has power control setup
<jcastro> SpamapS: hey, my mini proliant came in
<jcastro> you want one
<jcastro> it is SO cute.
<jcastro> I think everyone should have one
<marcoceppi> I know what the charm contest winners can put their gift card towards
<adam_g> those mini proliants are great
<adam_g> and cheap
<bkerensa> adam_g: but are they energy efficient?
<bkerensa> oo 150W
<bkerensa> two PCI slots so you could turn it into a little RADIUS box
<yolanda> hi, can i get some help with a charm? i have a db-relation-joined hook with postgresql, and seems that my hook hasn't been called. If i debug and call it manually my charm is working, if not, seems that the code isn't triggered
<yolanda> hi m_3, are you available? i found a problem in the openerp&postgres hooks
<_mup_> juju/trunk r521 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] merge provider-error status fix [a=bac][r=hazmat]
<yolanda> hi, what's correct way of restarting a service in a juju charm?
<yolanda> i have something like that:
<yolanda> # restart service
<yolanda> service openerp-core start || service openerp-core restart
<yolanda> however, it doesn't seem to be stopping openerp-core, it says that the process doesn't exist. If i find it manually using a ps -ef, do a kill and start the service, then works
<yolanda> hi, need help with a juju charm, i'm having problems restarting a service
<sc-rm> yolanda: where does the restart part of your gem look for the pid of the process, and is that pid stored there when started through your charm?
<yolanda> sc-rm, there is an openerp-core service, that is started on the install hook
<yolanda> and in the db-relation-changed, i try to restart it
<yolanda> because i need some changes to be applied on a config file
<marcoceppi> yolanda: How is it started in the start hook?
<marcoceppi> err, in the install hoook*
<sc-rm> yolanda: so openerp-core is a standard script in init.d? aka you are able to do /etc/init.d/openerp-core start|stop|restart and it works?
<yolanda> sc-rm, yes
<yolanda> the service is started automatically when installed the package
<yolanda> however, do i need to also provide a start hook that starts the service? or is doing it in the install hook enough?
<sc-rm> yolanda: okay, but if you log into that machine are you then able to do /etc/init.d/openerp-core stop and the process stops?
<yolanda> sc-rm, not at first try, if i do it it says that the process doesn't exist, but i can do a "ps -ef | grep openerp" and kill it
<yolanda> then if i do the service openerp-core start it works, and after that i can restart correctly
<yolanda> seems that there is some problem between install and db-config-changed hooks
<sc-rm> yolanda: aha, there is the problem, the way the service is started is not storing the pid in the same place as the start script expects it to be located
<sc-rm> the openerp-core script in init.d, is that something you created, and is it publicly accessible?
<yolanda> sc-rm, the openerp-core is a package that we are writing for openerp 6.1 version
<yolanda> i can send you the url for you to check it
<yolanda> sc-rm, code for the package is here: https://code.launchpad.net/~openerp-community/openerp-tools/yolanda.robla-openerp-core
<yolanda> sc-rm, any idea?
<yolanda> hi, any help with the openerp start script, please? i'm just blocked with that, only i need to restart service correctly to finish the charm
<sc-rm> yolanda: it seems okay to me; if you do install, do you then see the same pid in /var/run/openerp-server.pid as when doing ps?
<yolanda> let me test
<yolanda> sc-rm, pid is the same in /var/run/openerp-server.pid as the ps
<al-maisan> Can I use environment variables in environments.yaml? E.g. for secret keys?
<yolanda> sc-rm, but when i run the db-relation-changed hook, i see different pid in /var/run/openerp-server.pid and in ps
<yolanda> so i'm not able to kill old process and start new
<yolanda> sc-rm, i found some problem. I have this line in my db-relation-changed hook: service openerp-core start || service openerp-core restart
<yolanda> when i run it, pid in ps remains the same, but pid in /var/run/openerp-server.pid changes
<yolanda> and process isn't restarte
<yolanda> restarted
<avoine> yolanda: should it not be: service openerp-core start && service openerp-core restart?
<avoine> because you want to restart only if the start didn't work
<yolanda> yes
<yolanda> well, the start should not work because process should be already started by the last step
<avoine> oh no, you're right
<avoine> my logic fails in the morning
<yolanda> avoine, the problem i face is that when i do it, the pid in ps isn't changed, but the pid in /var/run/openerp-server.pid changes
<al-maisan> hmm .. did anubody see a "ERROR Invalid SSH key" upon sudo juju status -e oneiric-aws? The bootstrap command finished successfully
<al-maisan> anybody even
<yolanda> seems that the process isn't restarting correctly, but if i kill the process by hand and then run it, it starts to behave correctly, and also starts/stops correctly
<marcoceppi> al-maisan: do you have either an id_rsa.pub or id_dsa.pub in your .ssh folder?
<avoine> yolanda: maybe a permission problem
<al-maisan> marcoceppi: I have a symbolic link as follows: " id_rsa.pub -> juju_rsa.pub"
<al-maisan> in ~/.ssh
<yolanda> avoine, the process is running as openerp user, perhaps is that?
<yolanda> but it's the same also if i run as root
<marcoceppi> al-maisan: that error is usually indicative of your public key not being copied to the bootstrap node
<avoine> yolanda: if it's an upstart script it should not be a problem
<al-maisan> marcoceppi: removed the symbolic links and renamed the files, trying again
<yolanda> yes it is
<marcoceppi> al-maisan: if it works, what you can do then is add just juju_rsa.pub to authorized-keys in the stanza if you wish to segregate it from your standard id_rsa
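For reference, a sketch of the environments.yaml stanza marcoceppi describes for segregating a dedicated juju key; this is an assumption based on juju's configuration options of the time, and the environment name is illustrative:

```yaml
# environments.yaml (sketch; environment name is illustrative,
# authorized-keys-path is assumed from contemporary juju docs)
environments:
  oneiric-aws:
    type: ec2
    # point juju at the dedicated key instead of ~/.ssh/id_rsa.pub
    authorized-keys-path: ~/.ssh/juju_rsa.pub
```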
<avoine> yolanda: have you tried  start openerp-core || restart openerp-core  just in case
<al-maisan> marcoceppi: will try
<yolanda> avoine, unknown job
<avoine> hum
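The stale-pidfile symptom yolanda describes (the pid in /var/run/openerp-server.pid no longer matching what ps shows) can be checked explicitly before deciding how to restart; a minimal sketch, with the helper name invented for illustration:

```shell
#!/bin/sh
# pidfile_is_stale PIDFILE: succeed (exit 0) when the pidfile is missing
# or names a process that is no longer running, i.e. the situation where
# `service ... restart` claims the process doesn't exist.
# (Note: kill -0 also fails on processes owned by another user.)
pidfile_is_stale() {
    pidfile="$1"
    [ -f "$pidfile" ] || return 0            # no pidfile at all: treat as stale
    pid=$(cat "$pidfile")
    if kill -0 "$pid" 2>/dev/null; then
        return 1                             # process alive: pidfile is good
    fi
    return 0                                 # pidfile points at a dead pid
}
```

A hook could then fall back to killing by process name and starting fresh only when this reports staleness, instead of always chaining start || restart.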
<al-maisan> hmm .. marcoceppi .. made no difference .. still getting "ERROR Invalid SSH key"
<marcoceppi> al-maisan: are you using the juju PPA?
<al-maisan> I am on 12.04 and using the juju package
<al-maisan> marcoceppi: I also get this: http://pastebin.ubuntu.com/924951/
<al-maisan> is that normal?
<marcoceppi> al-maisan: yes
<al-maisan> hmm .. no luck .. I get the "ERROR Invalid SSH key" no matter what I try
<al-maisan> .. and I am supposed to demo this in 2 hours :-P
<al-maisan> marcoceppi: I see that jdstrand had a similar problem http://irclogs.ubuntu.com/2012/03/22/%23juju.txt
<marcoceppi> al-maisan: can you log in to ec2 get the address for the bootstrap and try to ssh -vv ubuntu@ec2url ?
<al-maisan> will try
<al-maisan> debug1: connect to address 50.19.3.207 port 22: Connection refused
<marcoceppi> al-maisan: that looks like a bug, but it might already be fixed in the PPA
<al-maisan> I am running Version: 0.5+bzr504-0ubuntu1
<al-maisan> should I install from the PPA?
<marcoceppi> Try the ppa, and add "origin: ppa" to the environments stanza you're using
<hazmat> al-maisan, can you pastebin the console output from the instance?
<marcoceppi> *in the interest of getting it ready for your demo*
<al-maisan> I try "ec2-get-console-output i-84af18e3"
<al-maisan> Only getting:
<al-maisan> i-84af18e3
<al-maisan> 2012-04-11T14:59:12+0000
<al-maisan> quietsch ~ $
<al-maisan> ignore the last line please
<al-maisan> OK .. upgraded to 0.5+bzr519-1juju5~precise1
<marcoceppi> al-maisan: make sure to update your environments.yaml file to use ppa as well
<al-maisan> yup
<yagey> morning.  is there a way to juju ssh into the bootstrap host?   I apparently need to run some lxc apt updates
<marcoceppi> yagey: juju ssh 0
<al-maisan> marcoceppi: when I try to connect to the instance from the aws web page it says: "Instance is not associated with a Key Pair. "
<yagey> that's what I hoped.
<yagey> yagey@yagey-VirtualBox:/tmp$ juju ssh 0
<yagey> 2012-04-11 08:13:51,250 INFO Connecting to machine 0 at localhost
<yagey> ssh: connect to host localhost port 22: Connection refused
<al-maisan> marcoceppi: "juju ssh 0" gets me into  the aws instance..
<marcoceppi> al-maisan: does juju status work?
<al-maisan> yes!
<al-maisan> now it works
<marcoceppi> al-maisan: success! you should be good to go for your demo
<al-maisan> yeah !!
<marcoceppi> yagey: have you restarted since the last time you tried to connect?
<al-maisan> marcoceppi: thanks for your help!
<yagey> rebooted? I'm using lxc, won't that kill the bootstrap instance?
<marcoceppi> yagey: it will, I was just making sure you haven't
<al-maisan> marcoceppi: I think I know what went wrong all the time: I was using *sudo* with juju all the time :-P
<marcoceppi> al-maisan: ah! that would probably be the reason, good to know it wasn't a bug *bug* :)
<al-maisan> yeah .. sorry for the noise :-/
<yagey> my mongo install is failing with:
<yagey> 2012-04-11 08:09:56,966 Machine:0: twisted ERROR: The following packages have unmet dependencies:
<yagey> 2012-04-11 08:09:56,966 Machine:0: twisted ERROR:  ubuntu-minimal : Depends: netcat-openbsd but it is not going to be installed
<yagey> any suggestions please?
<yagey> how about this one?
<yagey> juju deploy tomcat7
<yagey> 2012-04-11 08:39:14,215 ERROR Invalid options specification: options.multicast.type: expected 'string', got 'boolean'
<marcoceppi> yagey: one of your charms isn't updated properly.
<marcoceppi> Or the tomcat7 isn't updated. Let me check it
<marcoceppi> Interesting, which version of juju are you using yagey?
<yagey> whichever apt-get installed
<yagey> how can I tell?
<marcoceppi> dpkg -l | grep juju
<yagey> dpkg -l | grep juju
<yagey> ii  ensemble    0.5+bzr398-0ubuntu1    transition package for rename to juju
<yagey> ii  juju        0.5+bzr398-0ubuntu1    next generation service orchestration system
<marcoceppi> that's a pretty old version of juju, I don't think it has the updated types since boolean was added later
<marcoceppi> which is why it's saying expected string when it got boolean for the multicast configuration option in tomcat7
<yagey> ok.  upgrade from the ppa?
<marcoceppi> upgrading juju from that version to latest will likely break your current deployment though
<marcoceppi> so if you're okay with re-bootstrapping then yes
<imbrandon> SpamapS: ok so you have a conf file in /etc , you change a value of that conf , should you not place the whole tested conf in the charm vs a potentially untested conf with sed-changed values after install, for when the maintainer of a package changes values
<imbrandon> like the example given is 8 months from now the maintainer changes a default value, i would think just as in package management you would not want that default value automatically propagated to the charm until it's been tested against your charm
<imbrandon> ( or anyones feedback )
<marcoceppi> dropping a configuration file from 8 months in the past against an updated package is just as reckless
<imbrandon> no, not if there is no abi change
<marcoceppi> with a greater chance for breakage as it does not account for changes to the package
<imbrandon> much less chance, its been tested, the latter way has not, thus less
<imbrandon> if i have a config with show-errors off even if thats the default, then i know errors are off, if the maintainer changes that default to on, then errors are shown in my charm and it is then broken imho
<imbrandon> if i have a known state for the software in the charm, not the full system, but what you're providing, then you should own it all
<imbrandon> not the package maintainer
<marcoceppi> imbrandon: if you know you always need show-errors off then you should make sure it's set via the install hook
<imbrandon> sure, and i do as well as every other value
<imbrandon> by the whole conf
<imbrandon> known state
<marcoceppi> won't scale, and does not smell like a best practice to me
<imbrandon> it scales every day
<imbrandon> its called debian packaging
<imbrandon> it works the same way
<imbrandon> damn near every deb drops their config in place
<imbrandon> for much the same reason
<imbrandon> like i said if thats the default now, i cannot guarantee its the default always
<imbrandon> just as you point out
<imbrandon> so i need to make sure by setting all values
<imbrandon> thats why you get the little popup when debconf sees changes to files in /etc
<hazmat> there's a reason that charms have series in their qualified name
<imbrandon> hazmat: sure that holds up if all the software is installed via the package mgmt system and not git or getdeb or src :)
<SpamapS> imbrandon: for my money, you store the whole config in the charm
<SpamapS> curse you, high latency
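The approach imbrandon argues for, dropping the charm's whole tested config in place rather than sed-ing the packaged one, reduces to a couple of lines in an install hook. A sketch, with the helper name and example paths invented for illustration:

```shell
#!/bin/sh
# install_config SRC DST: keep the package maintainer's config around as a
# .dpkg-orig backup, then install the charm's known-good copy so every value
# is in a tested, known state regardless of future packaging changes.
install_config() {
    src="$1"
    dst="$2"
    if [ -f "$dst" ]; then
        cp "$dst" "$dst.dpkg-orig"   # preserve the maintainer's version
    fi
    cp "$src" "$dst"
}

# e.g. in a charm install hook (paths illustrative):
# install_config files/php.ini /etc/php5/apache2/php.ini
# service apache2 restart
```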
<_mup_> txzookeeper/managed-watch-and-ephemeral r57 committed by kapil.foss@gmail.com
<_mup_> session renewal notification, don't track eph sequence nodes, speling comments from review
<_mup_> txzookeeper/managed-watch-and-ephemeral r58 committed by kapil.foss@gmail.com
<_mup_> use a separate container node for the no track ephemeral test
<_mup_> txzookeeper/trunk r47 committed by kapil.foss@gmail.com
<_mup_> merge managed-watch-and-ephemeral a new auto reconnecting client that recreates presence nodes and triggers watch handlers on session expiration [r=fwereade,jimbaker]
<hazmat> m_3, ping
<m_3> hazmat: pong
<hazmat> m_3, i just saw r28 on charmrunner.. why did the relation check need to be disabled?
<hazmat> that's sort of the whole point..
<m_3> hazmat: I'll re-enable it wrt the new schema once environments are working (lp:~mark-mims/charmrunner/with-environment)
<m_3> hazmat: it was choking on status
<hazmat> m_3,  on the new status?
<m_3> hazmat: yup
<m_3> sorry... latency probs atm
<m_3> hazmat: there's no relation state there if the relation's good... only on fail
<m_3> hazmat: not a hard change, just in the way at the time
<hazmat> m_3,  right but that seems orthogonal to the env stuff
<hazmat> k
<m_3> yup... only related by the serial processing queue :)
<hazmat> m_3, if you use go i'm sure you could do it in parallel ;-)
<m_3> :)
<m_3> need go for wetware
<m_3> whoohoo!... charm in ec2 is testing charms in ec2... not quite automated yet
<m_3> gotta probably write a separate zombie killer script to clean up if there are accidental danglers
<SpamapS> m_3: and maybe use a limited access key that can only spawn a few machines? ;)
<m_3> SpamapS: unhuh
<SpamapS> m_3: I'd hate for the graph runner to get crazy with the cheeze whiz and cost you $500 ;)
<SpamapS> s/you/somebody/
<SpamapS> cripes
<SpamapS> precise is changing faster now than it has all cycle
<SpamapS> every day I sbuild-update and get 15 packages in the base install
<m_3> gotta wait until I get a fatter pipe to update
#juju 2012-04-12
<m_3> It's sad that a "reaper killing zombies" is the less macabre of descriptions
<m_3> the alternative "reaper killing orphans" just has such a nice ring to it :)
 * m_3 channeling Scrooge McDuck
<marcoceppi> I prefer the latter :)
 * marcoceppi wanders off
<yolanda> hi all, what's the process to review a charm? we've finished a charm for openerp 6.1
<fwereade_> yolanda, File a bug against charms at https://launchpad.net/charms/+filebug . Make sure it has the tag 'new-charm', and a status of "New", "Confirmed", or "Triaged", otherwise reviewers will not see it. If you are working on the charm and not ready for reviews, remove the new-charm tag or mark the status as "In Progress".
<bbcmicrocomputer> hmm, I seem to be able to set a boolean service config value using a config file, but not directly on the command line
<bbcmicrocomputer> e.g. juju set -e local service boolean=False
<bbcmicrocomputer> am I doing something wrong?
<bbcmicrocomputer> I get 'Invalid value for boolean: 'False'' if I do it via the command line btw
<niemeyer> Hey every body
<melmoth> hi there ! When i launch "juju bootstrap", it create a new instance, but without using any predefined ssh key, how can it then log into it ?
<avoine> melmoth: it uses the ssh key in your ~/.ssh/id_rsa.pub file
<melmoth> ok.
<melmoth> seems to work. it just appeared magic.
<hazmat> bbcmicrocomputer, that's a bug
<bbcmicrocomputer> hazmat: I'll file a ticket
<hazmat> bbcmicrocomputer, cool, thanks
<_mup_> Bug #979859 was filed: Unable to set boolean config value via command line <juju:New> < https://launchpad.net/bugs/979859 >
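The config-file route that bbcmicrocomputer says does accept booleans uses the yaml form that `juju deploy --config` takes (the service and option names here are taken from the example above; whether `false` or `False` is required is not confirmed in the log):

```yaml
# service.yaml, passed as: juju deploy --config service.yaml local:service
service:
  boolean: false
```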
<_mup_> Bug #979879 was filed: juju status fails when machines error on startup <juju:New> < https://launchpad.net/bugs/979879 >
<imbrandon> mmm fresh 2 ltr of mt dew sitting on my desk, mmmm good way to start the day
<bac> i am getting an error that 'mem' is not a valid constraint for deploy
<bac> juju deploy ...  --constraints "cpu=8 mem=68.4G" produces
<bac> juju: error: unrecognized arguments: mem=68.4G
<bac> m_3, hazmat: want me to file a bug or have i done something wrong?
<m_3> bac: I haven't used constraints yet... no idea
<bac> m_3, ok, thanks
<marcoceppi> I haven't tried the mem constraint, only arch and instance-type
<hazmat> odd
<bkerensa> jcastro: is the price the same for UDS attendees to go to the cloud summit?
<jcastro> I believe so
<bkerensa> k
<bkerensa> jcastro: I might be interested in going so I will see :)
<jcastro> I know where you have an extra $100 to spend!
<jcastro> dunno, might just wanna ask marianna when you get there
<jcastro> I don't think the audience is going to be for people like us.
<jcastro> though I am surely supposed to go, hah
 * imbrandon thought about it as well
<imbrandon> :)
<fwereade_> bac, are you sure you got the quoting right there? the error message looks like the command was trying to interpret "mem=68.4G" as an arg on its own rather than as an element of --constraints
<bac> fwereade_: the complete command was:
<bac> juju deploy --config=$REPO/$SERIES/buildbot-slave/examples/lpbuildbot.yaml --repository=$REPO local:buildbot-slave --constraints "cpu=8 mem=68.4G"
<bac> fwereade_: but i agree, that is what it looks like is happening
<fwereade_> bac, would you run that again with juju -v deploy and pastebin me the traceback please?
<bac> fwereade_: i will when i my current run finishes
<bac> thanks
<fwereade_> bac, sweet, tyvm
<fwereade_> bac, also, please let me know what version you're running; I can't repro it myself
<bac> fwereade_:  0.5+bzr519-1juju5~precise1
<fwereade_> bac, thanks
<marcoceppi> whoa, I think I was just impressed
<marcoceppi> If you terminate a machine outside of Juju, will Juju attempt to re-spin it up?
<SpamapS> marcoceppi: I believe the provisioning agent asserts something like that
<fwereade_> marcoceppi, yeah, if the instance a machine's assigned to apparently doesn't exist it will spawn a new one
<marcoceppi> that's. awesome.
<imbrandon> nice
<imbrandon> yea thats golden
<fwereade_> I think it's done that since... before I joined, actually
<marcoceppi> fwereade_: I've never tried, but since I can't connect to a very old bootstrap due to upgrading locally from PPA, I tried to take one of the machines down I knew I didn't use
<marcoceppi> then it popped back up a few mins later, to my surprise
<fwereade_> marcoceppi, heh, maybe a double-edged sword there :(
<marcoceppi> fwereade_: yeah, but it's okay. Worth it for this corner case
<marcoceppi> It gives me peace of mind
<bkerensa> jcastro: my interest in going is so that I could bring back some knowledge and buzz to share with our loco :) we are kind of the front lines in our region so if people ask about cloud I hope to speak competently about it
<bkerensa> :D
<flacoste> if i use juju from the pkgs ppa, do I need to set juju-origin in my environments.yaml file?
<marcoceppi> flacoste: yes
<marcoceppi> juju-origin: ppa
<flacoste> thx marcoceppi
<jkyle> heya
<jkyle> is this FAQ fairly accurate? https://juju.ubuntu.com/docs/faq.html
<imbrandon> jkyle: i'd say so, although juju is a fast moving target right now so , yea
<jkyle> of note was the "not production ready"
<jkyle> though, my site isn't necessarily "production"...reasonable stability and a base level of "functional" are important hehe
<imbrandon> jkyle: well, it depends on how much you want to get down and dirty, a small group of us have successfully been deploying OMGubuntu.co.uk with juju for a month or so now
<jkyle> so I'm wondering is it "stable enough" to use in other than a pure testing environment?
<imbrandon> but that's not to say it's without hiccups now and then, ask marcoceppi :)
<jkyle> I'd also be interested in chatting with someone from the core team, maybe in a pm?
<jkyle> I'll hit the lists up, not on the vpn for mail yet though :P
<imbrandon> the core team is all around, most anyone in here that speaks with a little knowhow of it is one of the core members; myself i'm just a charmer, but i'm guessing starting off with your questions in public might garner a bit more of a reaction
<marcoceppi> jkyle: I think it's "stable enough" to use in things other than pure testing; it's more of a full blown beta at the moment, though pretty stable. The only problem I've really encountered is occasional breakage during updates of the juju package, though that shouldn't happen much if ever going forward
<jkyle> alright, I'll give it a shot
<marcoceppi> jkyle: based on your requirements, I'd say juju is stable enough to use, it has most of its release features implemented and tested to my understanding
<imbrandon> marcoceppi: speaking of, promstrangulate me today  :)
<marcoceppi> imbrandon: after omg deployment :)
<imbrandon> :)
<jkyle> looking over docs. does juju do metadata collection on your nodes?
<jcastro> bkerensa: just hang out with me, marcoceppi, and imbrandon, you'll be all set.
<imbrandon> :)
<marcoceppi> jkyle: how so?
<jkyle> like chef's Ohai
<imbrandon> not really, kinda , more like it knows from the start and keeps the state
<jkyle> so you can query you infrastructure, e.g. knife search node 'role:mysql-master', returns a list of all nodes with roles mysql-master
<jkyle> that might be something worthy of adding (talking to myself)
<marcoceppi> jkyle: yes, kind of. Juju isn't aware of anything outside of its deployed environment. But it tracks where everything is
<imbrandon> marcoceppi: is apc and the like still on ?
<jkyle> interesting... Could do some data collection in a charm and dump data into a db
<imbrandon> crap wrong window
<flacoste> i'm getting the following error:
<flacoste>   /var/lib/cloud/instance/scripts/runcmd: 4: /var/lib/cloud/instance/scripts/runcmd: juju-admin: not found
<flacoste> that's in cloud-init-output.log
<flacoste> on a node
<flacoste> after a juju bootstrap
<SpamapS> flacoste: juju failed to install further up then
<flacoste> Get:2 http://ppa.launchpad.net/juju/pkgs/ubuntu/ precise/main juju all 0.5+bzr519-1juju5~precise1 [496 kB]
<flacoste> SpamapS: doesn't look like it
<flacoste> but dpkg -L juju does say that it's not installed :-(
<hazmat> jkyle, that notion of role is basically what juju's service units are, they are all clearly identified..
<hazmat> you can query out from the cli all the machines of a given service
<hazmat> juju status mysql
<hazmat> will list out the nodes of the service named 'mysql'
<hazmat> but its not a full inventory style like ohai
<hazmat> it also doesn't expose the notion of querying adhoc from within a charm/service unit, because a unit is explicitly related to other services, and as a result has private bi-directional channels with 'data bags'
<jkyle> hazmat: ah, gotcha. I also saw that only one "service" can be run per node. So your services are more like a collection of services than an aggregation of smaller services (e.g. role of roles)
<hazmat> jkyle, well a service is composed of units.. each unit representing at minimum a container, and typically a machine.. there is a separate notion of policy charms that can be deployed alongside existing services... the collection and aggregation notion doesn't seem quite right.. but yes coming from a roles-stamped-onto-machines perspective i can see where it arises.. effectively the independent unit should deliver all of the functionality required to fulfill its charm's interfaces.. that can be as large or as small as is useful.
<SpamapS> flacoste: it was downloaded, but never installed.. weird
<hazmat> jkyle, ie.. so a web app charm might set up nginx and a rails app, expose an http endpoint, and depend on a db, with the db fulfilled by a separate service and a reverse proxy by another.. that's the ideal anyways.. where the service's charm is doing the important bits germane to its functionality.. but the definition of a charm is pretty flexible and its exposed interfaces are pretty flexible, it could be doing the aggregation you mentioned and have the db and proxy and multiple workers local to itself on a per unit basis (even though that's definitely not best practice).. the key differentiation is the context-independent reuse; instead of having to fork and glue a databag onto each service you want to communicate with, the charms themselves model the relationships between the services and their communication as part of their definition.
<flacoste> how can i reset a juju bootstrap?
<flacoste> it didn't work
<flacoste> and i want to start from scratch
<flacoste> zookeeper was never installed
<hazmat> flacoste, juju destroy-environment && juju bootstrap
<jkyle> hazmat: interesting
<bac> hi fwereade_, i have another issue with setting constraints i'd like your help on, if you have a moment.
<fwereade_> bac, I'm very sorry; I *might* be able to be with you in an hour or so :(
<bac> ok
<fwereade_> bac, if you precis it now I can answer immediately when I make it back
<bac> fwereade_: i used this script to try to deploy two related services:  http://pastebin.ubuntu.com/926506/
<bac> at line 13 i set the constraints i wanted, needed 8 cpus
<bac> everything came up, but aws console shows the second service is using type m1.small.
<bac> so it looks like the call to set constraints didn't do what i expected it to do
<bac> this was a work-around to the issue i reported earlier where i couldn't specify the mem constraint on the deploy step
<flacoste> hazmat: thx
<flacoste> SpamapS: so, are you doing a juju upload today?
<hazmat> flacoste, he's been waiting on us slackers, but yes we should be good for an upload today
<flacoste> hazmat: cool, deadline is in 3 hours :-)
<hazmat> flacoste, good to know :-)
<flacoste> hazmat: make sure your tests pass this time ;-)
<hazmat> hah
<SpamapS> flacoste: no, but juju is not in main so it will come when it's ready
<SpamapS> universe is still open (with release team approval) for a little while longer
<flacoste> SpamapS: ah, ok
<SpamapS> hazmat: I plan to test quite a bit before this final upload
<SpamapS> the last one broke all existing installations
<SpamapS> I suspect this one might too
<hazmat> SpamapS, we've tried to be much more careful about not doing that
<hazmat> although the subordinate work does land a change..
<hazmat> its got transparent upgrade..
<hazmat> but code version drift.. its not backwards compatible
<hazmat> that landed last week
<SpamapS> Right, so, basically, anybody who has been thoughtfully testing beta2 ..
<SpamapS> gets screwed?
<SpamapS> again
 * SpamapS is feeling hostile .. sorry
<SpamapS> hazmat: we need to start gating trunk on not breaking existing bootstrapped envs ;)
<_mup_> juju/managed-zk-client r513 committed by kapil.thangavelu@canonical.com
<_mup_> start the global settings watch before we start
<marcoceppi> How do I repair an "agent state down"
<shazzner> destroy the service and redeploy it :p
<shazzner> mysql always goes down if I forget to break a relation and destroy its related service
<marcoceppi> weird, this happened because of high load on the machine. So I just did an add-unit then remove-unit for the downed agent
<hazmat> marcoceppi, woah.. what?
<hazmat> marcoceppi, you deployed mysql, added a relation and removed a relation, and mysql agent had an error?
<hazmat> oh.. high load
<hazmat> marcoceppi, there's a fix for high load/transient network problems on trunk should be in the upload to precise shortly
<hazmat> basically if the load gets high enough that the agent can't run for a long time, it's possible that it becomes disconnected, and if the disconnection is long enough its db/zk session expires. the code in trunk allows it to re-establish a new session.
<shazzner> hazmat: awesome :)
<marcoceppi> hazmat:  gotchya
#juju 2012-04-13
<melmoth> Hola juju crowd ! i do not understand why a service public-address actually shows a private instance address http://pastebin.com/j8YpJ6DH
<melmoth> any idea about this ?
<melmoth> grumble, machine froze just after i posted here.
<melmoth> if anyone reply, i did not see it. (and now my cloud is dead too :-) )
<imbrandon> morning all
<melmoth> well, back after the freeze, i can reproduce the same "problem" again (service public-address seems to be actually the private one)
<melmoth> http://pastebin.com/2i3jDM0m
<melmoth> anyone understanding why mysql public address is associated with a name i cannot resolve ? And is there a way to work around this so i can use juju debug-hooks
<melmoth> (and try to understand what mysql did not install correctly)
<bac> hi fwereade_  -- yesterday you asked me to run 'juju -v deploy' and send you the traceback.  i've done it but no traceback is generated.
<niemeyer> Morning all
<fwereade_> heya bac, niemeyer
<fwereade_> bac, so you just get the error and no traceback? that's ...surprising :(
<fwereade_> bac, and I've been unable to repro at all
<fwereade_> bac, was your deployment initially set up with an earlier version of juju?
<fwereade_> bac, because that *should* stop you from using constraints at all (and I can't *see* any holes in the logic there...) but what you've reported sounds like it may be consistent with that
<fwereade_> bac, btw, I was wondering about the script you pastebinned... it seemed as though you were manually specifying cpu/mem from the m2.4xlarge specification; is there a particular reason you weren't just setting `instance-type=m2.4xlarge`?
<bac> fwereade_, i've dug into it a little more and found the problem was on our side.  for testing purposes we've been using a wrapper that invokes /usr/bin/juju and it was not passing the params properly.  so sorry to have dragged you into this mess.
<fwereade_> bac, np at all, glad it's sorted out :)
<fwereade_> gents, I'm unaccountably exhausted, taking a short break
<niemeyer> fwereade_: Don't kill yourself man.. we need your brain
<flacoste> any idea what the following error means:
<flacoste> juju.errors.ProviderInteractionError: Unexpected Error interacting with provider: 409 CONFLICT
<flacoste> hmm, ok, found it
<_mup_> Bug #980855 was filed: Unhelpful error message when no nodes are available with maas provider <juju:New> < https://launchpad.net/bugs/980855 >
<robbiew> jcastro: so I guess I should ship these juju shirts somewhere today
<bkerensa> LOL
<bkerensa> :P robbiew  are you the shipping guy
<robbiew> bkerensa: man...I'm the *everything* guy it often seems for our events
<robbiew> lol
<bkerensa> robbiew: where is my juju travel mug :P
<bkerensa> ^
<bkerensa> lol
<robbiew> heh
<bkerensa> robbiew: :P I was serious... I didnt get my shirt or cup :P is it coming on the Tibet Express?
<bkerensa> takes me a month when I order stuff from Canonical Store :P
<robbiew> bkerensa: that's *all* jcastro
<bkerensa> ;p
<jcastro> bkerensa: yeah it's slow because I am shipping them all at once
<jcastro> bkerensa: I'm shipping them today it took a while to get addresses, etc.
<bkerensa> jcastro: no worries I was just trying to give robbiew a hard time
<jcastro> oh, just ask him how openstack is going, that outta do it
<bkerensa> LOL
<jcastro> <evil grin>
<bkerensa> jcastro: is he on the same team as adam_g?
<jcastro> bkerensa: he's the server team manager
<bkerensa> ahh
<bkerensa> :D
<hazmat> flacoste, is that 409 Conflict specific to that error?
<flacoste> hazmat: i think that's the API response we give whenever we aren't able to satisfy a constraint
<flacoste> so no nodes available
<flacoste> or the requested name is not available
<jcastro> heya shazzner
<jcastro> did any of your charms make it in the store yet? Last I checked you had a few pending
<shazzner> hey
<shazzner> jcastro: not yet, I think marco mentioned promulgating them but I hadn't heard anything yet
<jcastro> k
<shazzner> the gitolite and kusabax charms
<shazzner> thanks man :)
<jcastro> m_3: here's that charm that is in github: https://bugs.launchpad.net/charms/+bug/977552
<_mup_> Bug #977552: Charm Needed: Terraria server <new-charm> <Juju Charms Collection:Confirmed for h5-chuck-9o> < https://launchpad.net/bugs/977552 >
<jcastro> https://bugs.launchpad.net/charms/+bug/966484
<_mup_> Bug #966484: New Charm: Kusaba X <new-charm> <Juju Charms Collection:Fix Committed> < https://launchpad.net/bugs/966484 >
<jcastro> https://bugs.launchpad.net/charms/+bug/964936
<_mup_> Bug #964936: Drupal Charm: superchared Drupal charm with nginx,, apc, php-fpm, all setup to scale to the moon and be Best Practices. <new-charm> <Juju Charms Collection:Fix Committed by imbrandon> < https://launchpad.net/bugs/964936 >
<jcastro> https://bugs.launchpad.net/charms/+bug/906176
<_mup_> Bug #906176: Gitolite charm <new-charm> <Juju Charms Collection:Fix Committed by shazzner> < https://launchpad.net/bugs/906176 >
<jcastro> SpamapS: m_3: marcoceppi: negronjl: all of those are in the queue ^
<jcastro> who wants to promulgate?
<SpamapS> oh look, its friday friday not thursday its friday
<jcastro> all except one are in late stages of review, easy pickins' Friday?
<jcastro> SpamapS: hmm, you and m_3 are doing precise store aren't you?
<jcastro> should we just punt these to next week?
<SpamapS> right now I'm doing "landing subordinates in the PPA and eventually precise"
<SpamapS> the last few commits of juju added a dep on a newer txzookeeper
<jcastro> SpamapS: ok so these can wait, no biggie
<jcastro> negronjl: if you're looking for something to do, page up. :)
<negronjl> jcastro: I'm already looking at all three of them ... just give me a few
<negronjl> jcastro:  It appears that I am having AWS issues :/
<jcastro> <3 thanks dude
<bkerensa> adam_g: you got a minute or two?
<adam_g> bkerensa: im here
<bkerensa> adam_g: I have a question about a former employer of yours who is making an offering :P
<bkerensa> can I PM?
<adam_g> uh ya :)
<negronjl> jcastro:  It appears that I can't get any instances out of AWS from juju ( juju deploy <anything> doesn't work ) so, the reviews will have to wait until I get that sorted out.
<negronjl> jcastro: this was working a couple of days ago .... did I miss anything ?
<jcastro> bah if only we could  get it on hp cloud
<jcastro> negronjl: I don't think so, precise?
<jcastro> are you on distro or ppa?
<negronjl> jcastro: precise ... ppa
<negronjl> jcastro: actually ... launching from precise/ppa into oneiric/ppa
<SpamapS> could be juju problems
<negronjl> SpamapS: late changes breaking things ?
<SpamapS> negronjl: quite possible
<SpamapS> the PPA is about to break stuff again.. subordinates and some txzookeeper changes landing (all good stuff.. )
<negronjl> SpamapS: thought so ... I'll just wait
<negronjl> jcastro: Reviews are on hold until juju issues are sorted out :/
<jcastro> no worries, thanks for chipping in though!
<marcoceppi> SpamapS: nooooo
<SpamapS> marcoceppi: time to lock down to a branch :)
<marcoceppi> SpamapS: yeah, going to have to
<SpamapS> marcoceppi: tho distro won't change after this next upload
<marcoceppi> SpamapS: when's the next distro upload?
<SpamapS> marcoceppi: as soon as the PPA builds and I test it... so probably monday
<SpamapS> that will be the last precise upload :)
<marcoceppi> cool, we'll just switch to distro then
<SpamapS> *awesome* thunderstorm in west LA right now
<SpamapS> like.. BOOM BOOM BOOM .. we never get these
<_mup_> Bug #981086 was filed: typo in the subordinate-services.rst documentation file <juju:New> < https://launchpad.net/bugs/981086 >
<imbrandon> SpamapS: sure its not gunshots, or aliens in the battle for LA ?
<imbrandon> hehe
<SpamapS> imbrandon: no I am used to hearing those things every few days
<imbrandon> :)
<SpamapS> Hey so people
<SpamapS> it landed quietly
<SpamapS> but subordinates is *out*
<imbrandon> oh
<SpamapS> precise and oneiric have it in the PPA
<jcastro> always on the friday
<imbrandon> heh
<SpamapS> Well it landed yesterday..
<SpamapS> but the PPA has been broken for a few days
<jcastro> "ok done with this time to relax, oh wait new feature I've wanted for six months just landed!"
<imbrandon> thats cool though, cuz i just paid my aws bill so i got a nice clean slate
<jcastro> I am glad it is here though, awesome job guys.
<imbrandon> for the weekend
<SpamapS> one disclaimer.. I have no idea how to use it
<imbrandon> SpamapS: hahah nice, i'm sure we'll all figure it out over the weekend
<SpamapS> bcsaller: ^^ whats a good starting point for using subordinates?
<imbrandon> me and jcastro  and marcoceppi are all half sick but i'm sure as hell still gonna play
<SpamapS> I think this is a game changer
<SpamapS> Its going to make things like nodejs and symfony REALLY useful
<imbrandon> SpamapS: start with the jeos build :)
<SpamapS> because you can make your app a subordinate to them
<SpamapS> imbrandon: meh.. I think thats a recipe for disaster eventually :)
<imbrandon> likely
<imbrandon> but many jeos ones will be ok
<imbrandon> one for each bigger group
<imbrandon> i think
<bcsaller> SpamapS: its pretty simple (hopefully), the spec shows it and there is an example charm 'recorder' that is subordinate. It will only see the entity it's subordinate to for things like relation list
<SpamapS> I think we just need a --placement machine:4 flag to allow overriding the "unassigned" placement mode
<bcsaller> SpamapS: and status works as expected (even dot) so its easy to see whats going on
<SpamapS> bcsaller: so.. you're telling me.. this is actually up to date: https://juju.ubuntu.com/docs/subordinate-services.html .. .I don't believe you!
<SpamapS> ;)
<SpamapS> I do actually, thats awesome, thank you!
<imbrandon> spoiled with docs as soon as a feature lands
<imbrandon> back when ...... get off my lawn^Winternets
<bcsaller> SpamapS: yeah, it should be :)
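For anyone following along, the shape bcsaller describes looks roughly like this; the charm and relation names below are invented for illustration, not taken from the actual 'recorder' example:

```yaml
# metadata.yaml sketch for a hypothetical subordinate charm
name: recorder-like
summary: illustrative subordinate service
description: |
  Deployed into the container of whichever principal unit it is related to.
subordinate: true
requires:
  host:
    interface: juju-info
    scope: container
```

The `subordinate: true` flag plus a container-scoped relation is what makes the unit deploy alongside its principal instead of getting its own machine.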
<imbrandon> marcoceppi: btw i want to rename and clean that plugin up a little before its used anywhere but omg, just fyi , it will be good for the general wp one and maybe even public release but its a bit silly right now :)
<marcoceppi> SpamapS: I'd love to try that out
<SpamapS> marcoceppi: the OMG case would be to have the wordpress charm with all the generic stuff, then 'omg-wp' just have the themes and stuff.
<SpamapS> anyway, I need lunch.. then I'll start playing with it
<imbrandon> kk
<imbrandon> jcastro: http://www.youtube.com/watch?v=kfVsfOSbJY0
 * bcsaller grumbles at his wireless 
<marcoceppi> SpamapS: Right, and then be a subordinate of an nginx-proxy charm, right?
 * marcoceppi is excited
<imbrandon> yea i'm wondering if nginx and nginx-proxy should be separate
<imbrandon> guess so
<imbrandon> whoa
<imbrandon> umm
<imbrandon> hazmat: does awesome mean no need for providers
<hazmat> imbrandon, well it lessens the need for a native ostack provider at the cost of running a proxy
<imbrandon> nice
<imbrandon> i've just seen the announcement, not read it all the way
<imbrandon> are we talking a set of programs proxy or a whole machine instance
<hazmat> imbrandon, its basically a proxy that turns ec2 api calls into openstack native calls
<hazmat> and the proxy would have to be setup to be accessible from both the client and the environment (the provisioning agent needs to be able to use it as well)
<imbrandon> ahh
<imbrandon> so a bit more to it
<imbrandon> but still
<hazmat> i still think a native openstack provider has some value
<imbrandon> yea, a lot less headache i think
<imbrandon> i got it kinda workin btw but i'm not happy at all with my python skills
<marcoceppi> How do I resolve "Relation-errors" in juju?
<marcoceppi> juju resolved doesn't seem to work
<SpamapS> marcoceppi: it should
<marcoceppi> http://paste.ubuntu.com/928475/
<marcoceppi> SpamapS: It failed for the first unit-add, so I upgraded the charm with the fix, removed the extra unit, re-added it. It succeeded on the new unit but the old unit still has a relation error
<marcoceppi> It's also not executing subsequent additional peer relations
<marcoceppi> Scratch that last line
<marcoceppi> I'm not sure if that's 100% true or not
<SpamapS> marcoceppi: there was at one time a problem with upgrade-charm when the state was error
<hazmat> marcoceppi, juju resolved -h
<marcoceppi> hazmat: thanks
<hazmat> marcoceppi, you have to pass the relation name, else it just tries to resolve the unit alone
<hazmat> SpamapS, that's still the default, but its possible to pass a --force flag now
<marcoceppi> sweet
<hazmat> the upgrade state check only applies to the unit's agent-state not its relations, --force will bypass that check, but also won't execute the upgrade-charm hook (else the latter would have to cope with arbitrary conditions).
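To make hazmat's point concrete, a sketch of the command forms (the service, unit, and relation names here are placeholders):

```shell
juju resolved wordpress/0              # marks only the unit's own error state resolved
juju resolved wordpress/0 db           # names the relation, so the relation error is resolved too
juju resolved --retry wordpress/0 db   # additionally re-runs the failed hook
juju upgrade-charm --force wordpress   # bypasses the agent-state check; skips the upgrade-charm hook
```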
<marcoceppi> Okay, so I resolved the relation, but when the hook re-runs it's not getting the relation data anymore
<hazmat> marcoceppi, what do you mean?
<hazmat> marcoceppi, it can't access data of itself or a related unit?
<marcoceppi> hazmat: when the relation re-ran it didn't seem to actually receive the data from the peer relation
<hazmat> marcoceppi, can you pastebin the relevant parts of the debug or unit log?
<marcoceppi> well, it looks like the relation hook re-runs when you execute juju resolved; when it did, it produced the files it was supposed to, only it was missing all the data exposed in the peer relation
<marcoceppi> hazmat: shoot, I just destroyed the environment. Not feeling 100% today and about to head home
<hazmat> marcoceppi, bummer. hope you feel better, feel free to ping on it whenever
<hazmat> ^me
<marcoceppi> hazmat: I have this from the charm.log on the unit that failed:
<marcoceppi> actually
<marcoceppi> wait, no it choked on a different hook
<marcoceppi> nevermind
<imbrandon> SpamapS: your on mac hardware right ?
<imbrandon> ever seen this problem ? http://f.cl.ly/items/2N2k141d3G3Z193I2K0T/IMG_0084.JPG
<SpamapS> yes
<imbrandon> btw the normal image seems to work, the made-for-mac one wouldn't even boot
<SpamapS> imbrandon: no, but I'm on a MBP 5,1 w/ nvidia
<SpamapS> or MBA 4,1, which is intel
<imbrandon> hrm , this has ati
<SpamapS> yeah I got no ATI
<imbrandon> but the green monitor is actually usb
<SpamapS> I don't think I'll ever buy anything but intel graphics ever again
<imbrandon> its a dvi --> usb
<SpamapS> so tired of proprietary drivers
<imbrandon> hehe :)
<imbrandon> ugh cmd t not new tab
<imbrandon> gonna have to retrain my hands
<imbrandon> yea the two hooked to the ati card are fine, the one hooked to usb is green
<imbrandon> :(
<imbrandon> anyhow install finished, let me reboot and i'll see if i cant get it fixed up, least got a native 12.04 install now
<imbrandon> so far thats the only issue, abeit a big one
<hazmat> odd just had an ec2 instance stuck in pending for 30m.
 * hazmat watches the cloud burp
<hazmat> ah.. ec2 status page notes increased errors and latencies
<SpamapS> hazmat: -> us-west-2
<SpamapS> hazmat: I've been using it a lot lately.
<hazmat> SpamapS, yup just switched out to it
<hazmat> i wish ec2-describe-regions actually had the location name as well, would make it a bit easier
<SpamapS> hazmat: I've also noticed that us-east-1 is throwing more and more instances into random AZ's where it used to mostly put them all in a
<hazmat> this destroy-env waiting thing is totally bogus
<hazmat> on ec2, it never works
<hazmat> we should zero out the groups perms lazily
<hazmat> instead
<m_3> totally needs a '-y' or '-f' too imo... I've got to `yes | juju destroy-environment -e...`
<marcoceppi> m_3: IMO I think if you supply a -e with destroy-environment it shouldn't prompt you yes or no, only when you run it with the default environment
<m_3> marcoceppi: I'm fine with a '-y' yes, I really wanna
<m_3> but something automatable
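Until a '-y' flag exists, the `yes |` trick m_3 mentions works because `yes` feeds an endless stream of "y" lines to the prompt's stdin. A tiny stand-in script (not juju itself) shows the mechanics:

```shell
# confirm() stands in for an interactive y/n prompt like destroy-environment's
confirm() {
  printf 'Destroy environment? (y/n) '
  read -r answer
  [ "$answer" = "y" ] && echo "Environment destroyed."
}

# non-interactive: `yes` answers the prompt automatically
yes | confirm
```

The same piping works for any tool that reads its confirmation from stdin.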
 * SpamapS updates juju packaging branch to require txzookeeper >= 0.9.5~
<imbrandon> dmesg
<imbrandon> ugh
<imbrandon> vi /usr/share/X11/xorg.conf.d/52-displaylink.conf
<imbrandon> jesus these hotkeys are gonna kill me
<SpamapS> nice, I just upgraded my machine 0 from 519 -> 529 w/o incident
<imbrandon> heh i get the 3rd display working and the other 2 stop
<imbrandon> ok time to call in the big dawgs and find someone in #ubuntu-x
<imbrandon> and yup, my luck, no one around on a friday at 5
<imbrandon> :)
<SpamapS> anybody else playing with subordinate charms yet?
<m_3> SpamapS: not yet.. how's it going?
<SpamapS> interestingly
 * m_3 is just left hanging...
<SpamapS> well its not clear to me how arbitrary things can be related to other arbitrary things, as I thought they would
<SpamapS> but perhaps I should RTFM
<SpamapS> I get 'No matching endpoints'
<SpamapS> which contradicts the docs
<SpamapS> ah ok
<SpamapS> the docs aren't clear that you have to use interface: juju-info for that
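In other words (names below are placeholders): once the subordinate's metadata declares a relation with `interface: juju-info`, relating it to an arbitrary service resolves against that service's implicit juju-info endpoint rather than failing with 'No matching endpoints':

```shell
juju deploy mysql
juju deploy mysub               # hypothetical subordinate declaring interface: juju-info
juju add-relation mysql mysub   # matches mysql's implicit juju-info endpoint
```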
<SpamapS> doh, s3 fail
<m_3> east-coast primetime... yay
<marcoceppi> SpamapS: I plan on playing with them in a few hours
<m_3> SpamapS: can't wait to experiment
<SpamapS> don't try to do munin
<SpamapS> nearly done with that
<SpamapS> :)
<imbrandon> jesus frakin christ, i feel like it's 1999 and i'm having to edit an xorg file and can't get the damn thing right , and to top it off the recovery console didn't work because after it loads the console font it scrambles the screen
<imbrandon> fml
<SpamapS> imbrandon: I know the feeling. proprietary drivers are the suck
<imbrandon> there is a reason i use a desktop that "just works", it's so my blood pressure isn't this high
<imbrandon> SpamapS: i have all free drivers
<imbrandon> SpamapS: displaylink drivers are open too, as well as the ati ones i have loaded, they just dont work togather
<SpamapS> ahh
<SpamapS> so its just X fail
<imbrandon> ugh i'm just gonna use 2 screens for now
<imbrandon> yea x fail
<imbrandon> i have like 3 of these adapters, they are run-of-the-mill $30 at amazon, i have like 2 extra laying on my desk, maybe i can bribe some x hacker to make it "just work" with one
<imbrandon> i'll blog later about it
<imbrandon> pisses me off though, simple shit like this should work, i know its a common setup
<imbrandon> i see it daily
<imbrandon> its almost as bad as the big open whitespace on g+ now lol
<imbrandon> anyone else notice that ?
<SpamapS> http://ec2-23-22-28-42.compute-1.amazonaws.com/munin/
<SpamapS> woot
<imbrandon> nice
<SpamapS> lp:~clint-fewbar/charms/oneiric/munin-node/trunk
<SpamapS> feels a bit weird.. the service name on the other end is munin-node, not the real service name..
<SpamapS> so interfaces will probably have to push stuff like that through relation channels
<m_3> SpamapS: cool man
<SpamapS> ok, so , it seems to work. time for copious amounts of alcohol and sushi
<imbrandon> lol
<SpamapS> I have to say.. I looked at doing munin-node with existing chef/puppet solutions for it.
<SpamapS> they were all twice as complex
<SpamapS> the chef cookbook is basically the same..
<SpamapS> but has all the extra logic for installing on every platform under the sun
<SpamapS> ok... gone
<imbrandon> thats not necessarily bad if done right, juju will have to grow multi-platform wings someday to survive imho
<imbrandon> should be abstracted at the juju engine api though, not at the charm/cookbook
#juju 2012-04-14
<hazmat> SpamapS, sweet!
<hazmat> SpamapS, i think you meant cs:~clint-fewbar/oneiric/munin-node ;-)
<hazmat> hmm.. that brings up some interesting questions that the implementation doesn't take into account
<hazmat> namely subordinate charms from one series deployed onto a primary of a different series
<hazmat> niemeyer, ^
<SpamapS> hazmat: re the series question for subs.. it would be pretty stupid of people to do that IMO. ;)
<SpamapS> hazmat: perhaps we can just trust people to make wise decisions.
<_mup_> Bug #981387 was filed: juju status does not respect JUJU_ENV <juju:New> < https://launchpad.net/bugs/981387 >
 * SpamapS feels like a mad scientist right now.. creating a puppetmaster and puppet subordinate charms
<hazmat> SpamapS, nice
<SpamapS> got certificate signing working
<SpamapS> time for sleep
<SpamapS> subordinates are so awesome
<SpamapS> bcsaller: seriously, ^5 on getting this done
<bcsaller> SpamapS: really nice to hear :)
<jkyle> I'm following the 12.04 guide on wiki.ubuntu.com/ServerTeam/MAAS/juju. It's a bit out of date, should I install from the ppa or from the repository?
<melmoth> Morning juju !
<gua> jkyle: i'm not 100% on this, but https://juju.ubuntu.com/ says if you have 11.10 then use the repos one, which might be fine for 12.04
<melmoth> Is there a way to have juju choose a given flavor/arch for vm instances instead of picking the default one defined in environment.yaml ?
<shazzner> question
<shazzner> would something like rsnapshot make a good charm?
<shazzner> or maybe an rsnapshot frontend
<shazzner> hmm
<shazzner> I suppose rsnapshot would defeat the purpose of juju, which seeks to encapsulate services
<shazzner> so you don't have to hoard over precious config files
<shazzner> irc is a great scratch pad for talking to yourself :p
<SpamapS> melmoth: yes, that is called constraints
<SpamapS> melmoth: https://juju.ubuntu.com/docs/constraints.html
<SpamapS> melmoth: you will need the latest juju from the PPA for that.
<SpamapS> shazzner: rsnapshot is still useful I think
<SpamapS> shazzner: it would be a fantastic subordinate charm.
<melmoth> hmm SpamapS any idea how i can check if there's an "image-type" constraint of some sort ?
<melmoth> i would like to be able to change the default-image-id, i see there's an ec2-instance-type constraint, but it's not the thing i want to choose
<SpamapS> melmoth: they're listed at that url
<SpamapS> melmoth: image id is not used, and default-image-id is not recommended unless you're using a private cloud, since juju will automatically look up the arch/release/region/etc. to find the official ubuntu image for you
<melmoth> what i would like is to be able to get a vm matching a given release/arch, so that's what i need.
<melmoth> the idea would be, you work in support, you got a core file you need to run gdb on. What you want is access to a machine with the same os version/arch and lib version as the one
<melmoth> used to generate the core
<SpamapS> melmoth: right, --constraints "arch=amd64" cs:oneiric/foo
<SpamapS> melmoth: the release is determined by the charm
<SpamapS> melmoth: so that charms can be written without lots of conditional logic for changes between releases, they're maintained as a branch per release of Ubuntu.
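Putting SpamapS's answer together (the charm name `foo` is from his example; the constraint keys come from the constraints doc linked above):

```shell
# the release comes from the charm's series (oneiric here); the arch from constraints:
juju deploy --constraints "arch=amd64" cs:oneiric/foo

# or pin a provider-specific flavor outright:
juju deploy --constraints "instance-type=m2.4xlarge" cs:oneiric/foo
```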
<melmoth> hmmm.
<melmoth> i would love to be able to call a charm with arguments.
<melmoth> or maybe, should i write an on-the-fly charm generator script ? :)
<imbrandon> i think you might be missing some other subtleties if you want to do that
<melmoth> oh, i'm sure i'm missing a lot of juju subtlety. I've only started to look at it seriously today
<imbrandon> a charm really isn't a machine, it's more like a service that runs on that machine, so you aren't deploying ubuntu, you're deploying apache; apache determines if it wants ubuntu or not
<imbrandon> now with your given example above, i think one step higher in the food chain may be what you're looking for, as in MAAS
<melmoth> hmm, haven't looked at it right now (but i thought it was to deploy bare metal boxes)
<imbrandon> or a combo of MAAS for the OS/lib versions then a charm with gdb tools etc
<imbrandon> melmoth: yes, but you're wanting to manipulate the bare metal os features, so it makes sense; juju is service orchestration, there is a fine line, but i think you're gonna end up with a combo of them
<melmoth> i need to think about it. but there sure seems to be something that wants to be used there.
<imbrandon> e.g. if i need to provision a i386 box with glibc5++ thats a MAAS task , if i need gdb with all its goodies setup thats a juju task :)
<imbrandon> but juju will happily set up an os underneath itself
<imbrandon> if needed, just what it needs though, maybe not what you do
<melmoth> i was just thinking of having a way to have the installed box get a copy of the "customer" /etc/apt/sources.list and dpkg -l to re apt-get everything it can.
<melmoth> plus gdb and all debug symbols package on top of that
<imbrandon> melmoth: yea i mean don't get me wrong, it probably could be wrangled into doing it, just as MAAS could probably do a little orchestration
<melmoth> (which is also why i was hoping there was a way to give arguments to a charm)
<imbrandon> i was just giving ya another piece to the puzzle :)
<melmoth> that s good, i need to look at maas anyway.
<imbrandon> and you can give it options
<imbrandon> i wouldn't say arguments
<imbrandon> but charms have options
<imbrandon> juju set blah -e amazon
<imbrandon> or something like that
<imbrandon> and dont forget that maas and juju work as a team, so dont try to shove it all in one or the other either :) there is some gray area IMHO , but also i'm still very new to this too, only been charming about a month now
<melmoth> i ll need to talk about it with the support guys anyway :)
<melmoth> i think there s a beer calling for me downstair...will be back later on
<imbrandon> :)
<SpamapS> melmoth: charms can have arguments, defined by config.yaml
<SpamapS> hrm we need to move this out of drafts..
<SpamapS> https://juju.ubuntu.com/docs/drafts/service-config.html
<SpamapS> been implemented for about 9 months :)
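The service-config mechanism in that doc looks roughly like this (the charm and option names below are invented for illustration): the charm declares its options in config.yaml,

```yaml
# config.yaml of a hypothetical charm
options:
  tuning-level:
    type: string
    default: safe
    description: One of safe, fast, unsafe.
```

and users set them at deploy time or later, e.g. `juju deploy --config mycharm.yaml mycharm` or `juju set mycharm tuning-level=fast`.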
<imbrandon> SpamapS: it's so nice to be back on linux, i forget how easy it is to set up everything, like i've got my zend studio back installed, dev env's going, my ~/Projects git tree back etc etc etc, if i could just get used to the damned hotkeys again
<SpamapS> melmoth: anyway that details it
<SpamapS> imbrandon: :-D I know, I felt the same way
<SpamapS> any puppet fans, check out cs:~clint-fewbar/oneiric/puppetmaster and cs:~clint-fewbar/oneiric/puppet .. subordinates ftw :)
 * SpamapS disappears
<imbrandon> l8tr
<melmoth> thanks for the url !
<jkyle> good place to ask maas+juju questions?
<jkyle> I've got a maas server up, registered two nodes, they pxe booted. But now I'm stuck. they show as "commissioning" in the maas webui, but the docs say I have to click "accept and commission". An option that does not exist in my UI
<jkyle> https://wiki.ubuntu.com/ServerTeam/MAAS/AddNodes
<imbrandon> i dont see where it says to click accept and commission, it looks like under the heading where it says "In order to use Juju you have to Accept & Commission the nodes for use." there is an "add node"
<imbrandon> pictured
<imbrandon> jkyle: in other words you're fine, they will finish commissioning hopefully and you read an extra word in there in panic :)
<jkyle> imbrandon: check the very bottom image
<jkyle> imbrandon: "Click the Accept & Commission button"
<jkyle> with image
<imbrandon> sure thats once you can click on them
<imbrandon> you have not made it that far have you ?
<jkyle> they were pxe'd last night, and my ILOM shows it's completed successfully and has a prompt
<imbrandon> ahhh
<jkyle> imbrandon: yep, that's the step I'm at. I've added them, I have 2 nodes in a list, I pxe booted them, they've installed 12.04...but web ui still has them in "commissioning" mode
<jkyle> I suspect the network isn't set up right, but I don't know how to get shell access to the node (e.g. default username/passes)
<jkyle> tried ubuntu/ubuntu
<imbrandon> i doubt you can at this point, least i would hope not
<imbrandon> have you tried rebooting the webui node ?
<imbrandon> force it to refresh
<jkyle> just restarted. no effect.
<imbrandon> fun, might poke robbie when he is around but i'm guessing its not a network issue as maas controls all that
<imbrandon> its just plain dhcp
<jkyle> imbrandon: true, but I'd wager maas makes a lot of assumptions...like a flat network. mine's a bit more complicated. I tried to see what ip was served by doing a dig @localhost <hostname set in maas>
<jkyle> but it doesn't resolve
<imbrandon> jkyle: well that is likely the issue then maas controls the network
<jkyle> as it stands, I can't even delete the nodes (button grayed out).
<imbrandon> if it doesn't you end up with this :)
<jkyle> so, it'd be helpful to get into the node's shell and manually fix the network (and confirm that's the issue) then I can integrate some post flight scripts knowing what the problem is ;)
<jkyle> guess single usermode is always an option hehe
<imbrandon> i highly doubt that's possible, if it is it's a glaring security hole
<jkyle> the passwd hash is in the preseed. I bet dollars to donuts that the default pass is somewhere in the maas settings file
<imbrandon> really though to confirm it just spin it up with it controlling the network like it wants
<imbrandon> jkyle: i'm sure
<jkyle> imbrandon: it's not that maas is not controlling the network config, it's that my network is not a flat network
<imbrandon> right, i don't see the difference here, put it on its own
<imbrandon> it doesn't matter what your current setup is, i'm not talking about that
<imbrandon> just isolate maas and see if thats the issue
<imbrandon> trying to figure out too many variables at one time will give you grey hair, get maas working the way IT wants to, then if you want to try and integrate it into a non standard setup cool, but you know the ropes at that point
<imbrandon> see what i'm getting at ?
<imbrandon> anyhow its the weekend and i'm the only chicken in the house atm it seems, sorry i can't be much more help than that, but yea short of a MAAS dude poppin up i'd break it down to the simplest terms instead of trying to smash into a node, thats just my $0.02
<imbrandon> should just have to unplug, reconf the ui and re-pxe boot, hour-ish
<imbrandon> pxe install dont take long
<jkyle> imbrandon: yeah, I getcha. quicker to boot to single user though than reconfigure switch and router profiles
<imbrandon> :)
<jkyle> interestingly enough, when I rebooted the node, it loaded the _old_ OS. even though it was showing a 12.04 prompt. maas must use a headless boot as a holding phase or something
<jkyle> pxe stops at: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
<jkyle> might have to fall back to just cobbler and my own scripts for initial orchestration
<SpamapS> jkyle: IIRC, maas works like orchestra, and they shouldn't install the OS until you've bootstrapped with juju
<jkyle> SpamapS: what I'm stuck at is the node is booted, it's in the maas interface, but it just says "commissioning" and the docs don't reflect the options I have at this point
<jkyle> e.g., it says click "Accept & Commission" button before moving forward with juju...but no such button exists
<jkyle> I'll just move forward a bit and assume it's not necessary...see if juju just "works"
<SpamapS> jkyle: perhaps the docs are out of sync w/ late changes
<jkyle> k, I'm at juju status here: https://wiki.ubuntu.com/ServerTeam/MAAS/Juju
<jkyle> juju status returns "Could not find AWS_ACCESS_KEY_ID"
<imbrandon> is your env still type ec2 ?
<jkyle> juju status -e maas returns => Invalid environment 'maas'
<imbrandon> ppa juju ?
<SpamapS> jkyle: dpkg -l juju, you need at least bzr504
<jkyle> ah, my bad, dropped an 's' on the environment conf
<jkyle> I have 504
<SpamapS> jkyle: I'd recommend trying the PPA. The version in the PPA *should* land in 12.04 before the release, and has a number of minor fixes for the maas provider.
<jkyle> alrighty, now I get: juju environment not found: is the environment bootstrapped?
<jkyle> SpamapS: I'm using the ppa version
<imbrandon> so is it bootstrapped ? heh
<jkyle> well, I've followed all the directions to date
<SpamapS> jkyle: I'm guessing your environments.yaml is not quite right
<jkyle> I copied and pasted from https://wiki.ubuntu.com/ServerTeam/MAAS/Juju and replaced with the appropriate values
<imbrandon> did it return success when you ran juju bootstrap -e maas ?
<jkyle> lemme take it from the top. I probably missed a step
<jkyle> :P
<SpamapS> jkyle: if the docs are wrong, please do suggest fixes (or just edit the wiki directly :)
<jkyle> I get this from bootstrap => Unexpected Error interacting with provider: 409 CONFLICT
<jkyle> and, whaddya know, bug with my _exact_ situation hehe: https://answers.launchpad.net/maas/+question/193473
<jkyle> I'll totally edit if i figure out what's amiss
<jkyle> are maas and juju tightly coupled?
<jkyle> that bug describes what I had yesterday, with the start node button. but the latest maas update removes the start button. now just a grayed out delete
<SpamapS> jkyle: they should be independent, though juju is the only way to acquire/provision nodes in maas right now IIRC
<jkyle> maybe I should just fall back on cobbler and play with juju or chef
<jkyle> maas looks promising, but without docs to back it it's a guessing game
<imbrandon> true, its still only days old too :)
<jkyle> I'll be hanging with some maas folks @ the Openstack conference
<SpamapS> jkyle: the maas folks often hang in here or #juju-dev
<Daviey> jkyle: conflict usually means you don't have enough nodes, or you haven't accepted the enlistment in the webui
<Daviey> ie, not enough nodes in Ready state
<jkyle> yeah, I was talking with the package maintainer yesterday. very helpful
<jkyle> Daviey: right, problem is...there's no button to accept them ;)
<Daviey> jkyle: yes there is :)
<SpamapS> Daviey: backscroll, he's run into some confusion in the docs
<Daviey> ah
<SpamapS> jkyle: Daviey can help you :)
<imbrandon> \o/
<jkyle> https://bugs.launchpad.net/maas/+bug/981845
<_mup_> Bug #981845: "Accept & Commission" button does not exist <MAAS:New> < https://launchpad.net/bugs/981845 >
<Daviey> jkyle: Remember that MAAS isn't yet included in a supported release.. So be fair :)
 * imbrandon trails off to provision 10 servers in hp cloud and start a ndb charm maybe
<jkyle> Daviey: oh, I realize I'm on the bleeding edge of the bleeding edge
<Daviey> jkyle: ok.. i know your problem :)
<jkyle> Daviey: even as new as it is, so far it's easier to grok than crowbar when the information is found :)
<Daviey> sudo mount /var/lib/maas/ephemeral/precise/server/amd64/20120328/disk.img /mnt/ ; sudo chroot /mnt ; sudo apt-get update ; sudo apt-get install cloud-init ; exit ; sudo umount /mnt
<Daviey> restart the node.
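(Editorial note: chained with `;` as quoted above, the commands after `sudo chroot /mnt` only run on the host once the interactive chroot shell exits. A sketch that runs the apt steps inside the chroot instead — the dated image path is an assumption and should match whatever is on your maas server:)

```shell
#!/bin/sh
# Sketch of Daviey's workaround, with the apt-get steps actually executed
# inside the chroot. IMG path/date is an assumption; adjust to your install.
IMG=/var/lib/maas/ephemeral/precise/server/amd64/20120328/disk.img

update_cloud_init() {
    sudo mount "$IMG" /mnt
    sudo chroot /mnt apt-get update
    sudo chroot /mnt apt-get install -y cloud-init
    sudo umount /mnt
}
```

Then restart the node, as Daviey says.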
<SpamapS> Daviey: oh I think I saw that bug.. pre-seed is messed up?
<Daviey> jkyle: super!
<Daviey> SpamapS: No.. it's a self fixing bug at release time.
<Daviey> :)
<Daviey> SpamapS: We can't make people update the daily ephemeral image on a daily basis, so we are using milestones.. The most recent milestone is beta-2, which doesn't have a recent enough cloud-init.
<Daviey> So when final is canned, it'll fix itself.
<jkyle> on a side, there could be issues with pulling images from ephemeral for some companies
<jkyle> I mean for the bootstrap phase. are there plans to pull those in locally?
<Daviey> jkyle: right.. next cycle we are going to make it so it can run entirely offline.
<SpamapS> Daviey: *ahhhhh*
<Daviey> (TBD at UDS>)
<Daviey> jkyle: it's a one time process, right.. it only imports them when you run command.  You could work around it if needed, by importing from usb media, or something
<jkyle> apt-get update fails on most of the mirrors
<SpamapS> <sigh>
<SpamapS> Expires header problems. :-P
<SpamapS> ok, back to my Saturday
<SpamapS> jkyle: good luck :)
<jkyle> thanks!
<melmoth> hmm, stuck too with https://answers.launchpad.net/maas/+question/193473 (and me too, i did not have a Accept & Commission button)
#juju 2012-04-15
<m_3> this is only install tests, but precise charms are looking good so far... http://ec2-23-20-58-1.compute-1.amazonaws.com:8080/
<SpamapS> m_3: sweeeeet
<SpamapS> m_3: we're going to need a whole new test methodology for subordinates
<SpamapS> man juju status seems way slower with the subordinate stuff
<SpamapS> lots more to look at in zk I guess
<SpamapS> m_3: http://paste.ubuntu.com/930324/ .. status looks a bit wonky.. so waiting for 'started' may be harder
<SpamapS> ok time to get back to Saturday-ing
<m_3> yeah, subordinates are a whole other world of tests
<imbrandon> marcoceppi: is the supercache still looking at memcache maybe ?
<melmoth> anybody knowing what to do when there s no "accept & commission" button ? (https://bugs.launchpad.net/maas/+bug/981845)
<_mup_> Bug #981845: "Accept & Commission" button does not exist <MAAS:Invalid> < https://launchpad.net/bugs/981845 >
<SpamapS> melmoth: I think daviey said there's a weird hacky thing you have to do...
<SpamapS> 12:49 < Daviey> sudo mount /var/lib/maas/ephemeral/precise/server/amd64/20120328/disk.img /mnt/ ; sudo chroot /mnt ;  sudo apt-get update ; sudo apt-get install cloud-init ; exit ; sudo umount /mnt
<SpamapS> 12:49 < Daviey> restart the node.
<melmoth> SpamapS, try it (i saw his post on irc), without success.
<SpamapS> melmoth: bummer
<melmoth> the "new" version of cloud-init i installed in the disk.img was 0.6.3-0ubuntu1 (that's the one the repo i used had), maybe it still does not contain the fix ?
<SpamapS> cloud-init | 0.6.3-0ubuntu1 |       precise | source, all
<SpamapS> melmoth: thats the latest
<SpamapS> melmoth: after doing that did you start over?
<melmoth> yep
<SpamapS> perhaps the nodes need to be reinstalled from scratch
<melmoth> i wanted to remove the nodes and put them back, but i can't remove them, the button is greyed out and it says they are being used
<melmoth> i added a new node, still no button either
<melmoth> if i reboot the nodes, they are reinstalled and end up in "ready" states, but no button....
<_mup_> Bug #982353 was filed: destroying a subordinate service after destroying its primary services results in a traceback... <juju:New> < https://launchpad.net/bugs/982353 >
<m_3> SpamapS: when you get a sec... we've got a decent picture of breakages http://ec2-23-20-58-1.compute-1.amazonaws.com:8080/
<m_3> SpamapS: most of the failing charms are either a.) failing in oneiric too, or b.) expected (hadoop,cloudfoundry)
<SpamapS> m_3: cool!
<m_3> SpamapS: I'd be fine with pulling the switch at any time... so let's just figure out what makes sense from a release perspective
<SpamapS> m_3: one question.. why do we have failing charms in oneiric?
<m_3> ha!
<m_3> good question... just haven't gotten to fixing them.  mostly package/repo barfs I think (node,nfs)
<m_3> you can see details of most oneiric charms in charmtests.markmims.com (maybe a week out of date)
<SpamapS> ok, doing weekend stuff now.. will check it out later
<m_3> cool man
<m_3> I won't push succeeding charms to precise branches... that makes sense to do through lp
<m_3> I'll work on fixing failing ones today to make the oneiric branches work for precise
<jamespage> m_3: that reminds me that we need to sort-out which hadoop charms are going to transfer to precise
<m_3> jamespage: yup
<jamespage> and whether I should be pushing the current precise one to oneiric - it works just fine - but seems a little pointless now
<m_3> jamespage: I think it'll just be easiest to transfer all the oneiric ones and then manually remove the links for ones we want changed
<jamespage> m_3: yeah - I guess that makes sense
<jamespage> its closest to the approach we would take in the ubuntu archive
<jamespage> new release == old release; then `make changes`
<jamespage> or rm -Rf
<m_3> not sure how the lp auto-changeover scripts'll work... it might make sense to push it to oneiric first, then let the script treat it as normal
<m_3> but that also might break the script too :)
<jamespage> m_3: because the branch already exists for precise?
<m_3> right
<jamespage> OK
<m_3> we'll find out though... there're several situations like that already I think
<jamespage> I love some live testing!
<m_3> oh yeah!
<m_3> it looks like most charms are doing fine just being auto-promoted though
<m_3> s/are doing/will do/
<hazmat> g'morning
<jamespage> morning hazmat!
<m_3> mornin K
<hazmat> m_3, just testing out the new recorder, its env agnostic now, use sftp to fetch logs
<m_3> hazmat: whoohoo!
<m_3> awesome... I've been testing precise charms, but I'm itching to get back to testing against other providers
<hazmat> m_3 cool, re subordinates we just sample them against a known set of interesting other charms
<m_3> hazmat: guided planning then?
<hazmat> m_3, partly the deps for the sub are still resolved normally, but a set of known host services is used to associate them..  but yes effectively it is guided planning
<hazmat> m_3, technically with the default juju-info we'd have to do a pair wise construction of a sub against all other non sub charms known to pass tests to get full coverage/compat checking
<hazmat> er.. default `juju-info` relation that gets established for subs
<m_3> wanna maybe start with a --parent param to plan?  then we can expand from there?
<_mup_> Bug #982422 was filed: unable to simulate a failed hook with debug hooks. <juju:New> < https://launchpad.net/bugs/982422 >
<hazmat> m_3, i'd rather pass it via env var, it feels silly to pass a param for all charms on the basis that some may possibly use it, when most won't
<hazmat> but parameterization sounds good
<m_3> cool with either
<m_3> we'll choose something other than wordpress as the primary to attach to :)
<m_3> I guess metadata[subordinate] will be the best way to determine if we should be looking for a primary to glom on to
<m_3> didn't see any other ways of doing it in the docs, but haven't read too carefully wrt this
<hazmat> m_3, yeah. that's the primary way to ask the question is this a subordinate charm
<melmoth> What is the meaning of the node status: "ready" "commisionning" ?
<melmoth> I still do not have the "accept and commission" button, but if i start a node, it ends up in a ready state.
<melmoth> but still, juju refuses to bootstrap
<hazmat> flacoste, ^
<flacoste> melmoth: there is a problem with the commissioning process as present in the archive
<flacoste> melmoth: it uses an image which doesn't support commissioning :-(
<flacoste> melmoth: the work-around is to manually update cloud-init in the commissioning images
<flacoste> unfortunately, i don't have step-by-step instructions on how to do that
<flacoste> Daviey and smoser could provide this
<flacoste> but i'm not sure they are around
<melmoth> flacoste,  i did update cloud-init
<flacoste> ah, ok
<melmoth> https://bugs.launchpad.net/maas/+bug/981845 (see my comment 2)
<_mup_> Bug #981845: "Accept & Commission" button does not exist <MAAS:Invalid> < https://launchpad.net/bugs/981845 >
<melmoth> i even re installed from scratch and updated the cloud init thingy before adding the node , just in case.
<flacoste> melmoth: btw, if you start an instance, you cannot use it with juju as it's supposed to be available for "self-admin"
<flacoste> so you need nodes in the 'Ready' state
<flacoste> so that completed the commissioning process
<melmoth> i see them in ready state (after having bootem them up manually that is)
<flacoste> melmoth: ah
<flacoste> there is another bug
<flacoste> commissioning should shutdown the instance :-)
<flacoste> https://bugs.launchpad.net/maas/+bug/981116
<_mup_> Bug #981116: MAAS should power down the node once the commissioning process is successful <MAAS:New> < https://launchpad.net/bugs/981116 >
<flacoste> manually power them down
<flacoste> and then they should be good to go with juju
<melmoth> i do all of this within kvm (maas and nodes), am i supposed to tell maas to use virsh instead of wake-on-lan then i guess.
<flacoste> yes
<flacoste> you should do that
<flacoste> but that still doesn't make the node power-down at the end of commissioning
<flacoste> the problem is that the node is 'Ready'
<flacoste> but still running
<flacoste> and the only way to get a node to contact the MAAS to know what to do
<flacoste> is by booting it
<flacoste> so when you juju bootstrap
<flacoste> you'll just wait and wait
<flacoste> for a node to come up
<flacoste> probably power-cycling the node that has been assigned as the juju bootstrap node
<flacoste> would work just as well
<flacoste> but i'd simply power-down all 'Ready' nodes
<melmoth> Soo, i added 5 nodes, all in ready state, all powered off (i try with 2 powered on too, just in case)
<melmoth> juju bootstrap fails with an internal server error. maas.log says http://pastebin.com/LryTtLNP
<_mup_> charmrunner/charmrunner r31 committed by kapil.foss@gmail.com
<_mup_> recorder is now provider type agnostic and can work against any specified env, update watcher for new status format
<_mup_> juju/trunk r530 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] update status to utilize JUJU_ENV if set [f=981387]
<_mup_> juju/trunk r531 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] update terminate-machine to utilize JUJU_ENV if set [f=981387]
#juju 2013-04-08
 * popey wonders if anyone is planning on looking at bug 1159020
<_mup_> Bug #1159020: SyntaxError: invalid syntax <juju:Confirmed> < https://launchpad.net/bugs/1159020 >
<mgz> popey: you need to poke someone capable of fixing it
<popey> name names! :D
<mgz> it's good to keep bugging, but... this seems python distro/lxc related, rather than juju per se
<popey> ah, good point
<mgz> so, serge or barry would be my starters
<popey> thanks
<mgz> I also want to know what's going on with it so I can announce the 0.7 release and get it in raring
<mgz> ...hm, may be worth a mailing list post
<mgz> popey: have sent an email asking for help
<popey> oh thanks!
<popey> so in the meantime I have used juju on my home server running precise..
<popey> I did the bootstrap, juju deploy etherpad-lite, then juju expose etherpad-lite
<popey> which all seemed to work well \/
<popey> er, \o/
<popey> however it's given the instance a private address on the server, how can I easily make that accessible to other machines on my network?
<popey> http://paste.ubuntu.com/5689155/  is what it looks like at the moment
<popey> hmm, guessing that's some kind of lxc bridged connection?
<popey> http://s3hh.wordpress.com/2011/05/17/lxc-with-bridged-network/ looks handy \o/
<marcoceppi_> popey: I would hope that works, but I'm not confident as I think Juju sets up its own bridge interface during bootstrap
 * popey tries it out with a fiddle to the lxc.conf
<marcoceppi> I know I asked a little while ago, but at this point we're pretty serious about breaking charms during upgrade-charm event cycle. (just to confirm)
<jcastro> popey: if you figure that out, please document it!
<popey> k
<jcastro> m_3: heya how did the jenkins thing go?
<jcastro> you hook up with IS?
<m_3> jcastro: nope, I'm just gonna hook up with jamespage to debug it
<jcastro> marcoceppi: oh hey, you're on queue duty this week (see topic)
<marcoceppi> yup, looking at the queue now
<mgz> popey: are you up for some raring experimentation for your lxc issue?
<popey> sure mgz
<mgz> popey: okay, can you try using the branch lp:~gz/juju/lxc_env_testing locally?
<mgz> should be able to just branch and do `python setup.py install --user` then run from your ~/.local/bin
<hazmat> mgz, no need to warn, path manipulation is part of local to allow for dev envs
<mgz> I just want to see if it appears in the log, I don't think this is the right fix
<mgz> (if it's this, the lxc scripts are what need changing)
<popey> ok, so I have installed..
<popey> just cd ~/.local/bin && juju status  ?
<popey> probably need to bootstrap on this actually, not run properly yet
<popey> ok, thats better, bootstrap is working, status returns something sane
<popey> and its now deploying etherpad-lite
<popey> mgz: http://paste.ubuntu.com/5689628/
<mgz> heh, I can't code
<mgz> need to work out what the lxc tests require apart from TEST_SUDO to run
<mgz> either way, shows that you did have some PYTHON envvar breaking things
<mgz> popey: pull, setup.py install --user again and re-run that command
<mgz> then paste the same log please
<popey> k
<popey> mgz: http://paste.ubuntu.com/56899657
<popey> er
<popey> http://paste.ubuntu.com/5689657
<popey> hang on, my mess, let me start again
<popey> http://paste.ubuntu.com/5689662 mgz
<mgz> okay, so that's resolvable
<popey> super
#juju 2013-04-09
<jcastro> mramm: is there a release of core between now and 13.04?
<mramm2> yes
<mramm2> later this week I expect another one
<mramm2> and possibly one early next week
<m_3> jcastro: jokes go here
<jcastro> hey so
<jcastro> I panged bruno and patrick to clean up the django readme
<jcastro> since it's like a template right now
<jcastro> I was thinking for release week we'd have 2 big blog posts
<jcastro> one for core
<jcastro> one for charms
<m_3> ack
<jcastro> to go along with newdocs and better juju AU questions
<m_3> marcoceppi: jenkins-slave w/ juju sub works great
<marcoceppi> excellent
<m_3> I'll just now see what direction for the external relation makes sense
<m_3> i.e., config on the slave or the master
<m_3> prolly elastic-ip on the slave and then config the master
<m_3> frickin cool
<jcastro> don't forget to post your thoughts on rick's metadata thing on the list fellas
<m_3> the list is so boring
<jcastro> Deal with It(tm) :)
<m_3> :)
<marcoceppi> B)
<robbiew> lol
<m_3> marcoceppi: damn... the auth model for jenkins-master/slave really requires them to be local
<m_3> at least as far as I can see
<m_3> :(
<marcoceppi> m_3: You mean local:charm or master slave on the same machine?
<m_3> marcoceppi: same subnet / juju environment
<marcoceppi> Ugh, what?
<marcoceppi> that's pretty lame
<m_3> yeah
<m_3> well at least the way the charm's written now
<m_3> can maybe front it with apache2 configured for auth'd reverse-proxy
<m_3> actually, nm... that would probably work
<m_3> just won't be as lightweight as we discussed before
<m_3> but hey, it's juju
<m_3> so that doesn't really matter as much
<m_3> marcoceppi: so I'll maybe extend jenkins so that the master has config to: a.) accept creds to b.) point to an "external" slave
<m_3> anyways, I'll run that by ben
<marcoceppi> I've got a relation, that essentially provides an API over http. I should probably just make it consume a http interface, because that's all it does
<marcoceppi> But during graph testing it'll fail, because not just any http interface will do
<marcoceppi> Should this become its own interface name even though it's basically just a port + http address?
<jcastro> https://juju.ubuntu.com/community/weekly-charm-meeting/
<jcastro> thoughts?
<m_3> sure man
<marcoceppi> +1
<imbrandon> marcoceppi: tough call, but i would say yea
<m_3> maybe we can visit charm quality and audits across the store
<imbrandon> because like port 80 a lot of services run on it
<imbrandon> that are considered diff
<marcoceppi> imbrandon:  true, the interface defines the service, not so much the protocol which the service is using
<m_3> marcoceppi: I'd say leave it http... then it can be wired behind proxies and the like
<imbrandon> right, i mean its a fine line, but thats how i would see it
<m_3> the fact that it's an api is just like saying it's special because of its content
<imbrandon> hrm good point … hum
<m_3> we'll need to exclude relations from some automated testing... just make sure they're caught in $CHARM_DIR/tests
<marcoceppi> m_3: What I'm worried about is on the flipside, the charm requires: api: http
<marcoceppi> m_3: gotchya, I think http is one of those too broad to test ones
<m_3> that's fine imo
<m_3> b/c that api server might be balanced, proxied, etc etc
<m_3> external perhaps over time
<imbrandon> m_3: btw i tried setting up a slave jenkins and charm runner ( running on OpenShift machines ) but could never get it to quite kick off fully automatic
<imbrandon> might bug ya about that sometimes $soon
<m_3> imbrandon: hmmm
<m_3> imbrandon: sure.. lemme see if I can get a standalone slave hooked up to an external master here
<imbrandon> its at ci-bluecluster.rhcloud.com:8080 , 3 node jenkins setup just for this … but i got it offline for the moment ( was working on some update to WordPress on my own site this morning )
<imbrandon> i can give you full root to them in a few , if it helps
<imbrandon> its one master and 2 builders + can run smallish jobs on the master
<m_3> imbrandon: ack
<m_3> no need atm... I'm gonna run to shower before dinner
<imbrandon> got a set of vCloud machines to play with too ( 6 i think ) unmetered that i can use to add more builders if we can parallelize(sp?) it like that
<imbrandon> np, yea i got things to finish up for the next few hours but i'll likely be doing that this evening local time
<imbrandon> no if we only had a vCloud provider … :)
<imbrandon> s/no/now
<jcastro> marcoceppi: can you unpromulgate alice-irc? it's dead upstream
<jcastro> hazmat: hey what's the ballpark time for jujujcharms.com to render an updated README?
<hazmat> jcastro, 30m max o u t
<hazmat> marcoceppi, there's another branch of charmrunner
<hazmat> with notion of  building out juju versions for each test
<hazmat> marcoceppi, imbrandon, m_3 i think we need a mass change to charms to populate initial tests with rels
<marcoceppi> jcastro: unpromulgated
<marcoceppi> hazmat: which branch is that?
<jcastro> marcoceppi: ta
<hazmat> looking around for it
<jcastro> marcoceppi: also if you want to rename promulgate/unpromulgate to something more sane while SpamapS isn't looking ...
<marcoceppi> jcastro: duh, juju publish ;)
<jcastro> I was just looking for a way to mess with clint
<hazmat> marcoceppi, ~hazmat/charmrunner/wip
<hazmat> its got a little bit of cleanup, and some tests, but i realized that doing graph stuff at all is kinda of bogus long term, we should just seed some test scripts
<hazmat> digitalocean btw is awesome
<marcoceppi> hazmat: I'll take a look and merge the stuff I have done over the week
<hazmat> marcoceppi, what's your focus for it atm?
<hazmat> getting the server going?
<marcoceppi> hazmat: generating graph tests externally, making it a subordinate, streamlining and separating graph test from unit testing
<marcoceppi> giving more granular test statuses
<hazmat> marcoceppi, there's a useful script in the branch for fetching all the charms into a nice directory structure that includes ownership (ie not a straight charm repo)
<hazmat> and mods to the planner to generate plans from that structure
<hazmat> cool
<marcoceppi> arg, just realized I've been talking about charm tester not charmrunner
<marcoceppi> your changes will be great for integrating with charm tester
<hazmat> ah.. makes more sense now
<SpamapS> promulgate is the right word. Just because you guys don't want plain english.. ;)
<marcoceppi> SpamapS: jcastro: I think we should make it promulgar
<marcoceppi> Could just do "charm yo promulgo"
<SpamapS> asi es mas internacional
<ahasenack> hi guys, I just tried two charms from the charm store and both don't deploy, I guess it's my lucky day
<ahasenack> https://bugs.launchpad.net/charms/+source/etherpad-lite/+bug/1166997
<_mup_> Bug #1166997: etherpad-lite doesn't deploy <etherpad-lite (Juju Charms Collection):New> < https://launchpad.net/bugs/1166997 >
<ahasenack> https://bugs.launchpad.net/charms/+source/subway/+bug/1166989
<_mup_> Bug #1166989: Subway charm doesn't deploy on precise <subway (Juju Charms Collection):New> < https://launchpad.net/bugs/1166989 >
<ahasenack> I tried on lxc, but the error is unrelated
<ahasenack> hm, I may have messed up, default-series is precise in my environments.yaml file, but I'm running quantal
<hazmat> sidnei, re canaries and upgrade.. alternatives environment snapshots
<paraglade> is it possible to set values of keys defined in the config.xml during a hook execution?  I know how to get the values just not sure how to set them.
<marcoceppi> paraglade: You can't change the charm's configuration (or at least the values of a config-get) from within the charm itself
<paraglade> hmm ok.  marcoceppi so I am trying to build a zabbix charm, the agent (a subordinate service) can be setup to "talk" to a server via a config set or with a relationship to a server or proxy.  I am kinda struggling on trying to implement this.  I looked at other charms but nothing really seems to model this setup exactly.
<marcoceppi> paraglade: Zabbix! Very exciting. What it sounds like you're tying to do is get the "master" server which can either be a relation or a configuration value, correct?
<paraglade> yes thats correct
<marcoceppi> Cool, there are a few ways to tackle this. You'll have to forgive me, I usually use Nagios so I'm not sure of the inner-workings of Zabbix, but if you can only designate one "master" then I would just choose a pecking order of which is more important. User defined configuration or relation data.
<marcoceppi> In my opinion configuration data takes precedence over relation data. So I would have the relation hook check if the configuration value exists. If it doesn't then use the data from the relation, otherwise just exit silently.
<marcoceppi> It gets a bit more complicated if you can have multiple "master" monitoring services. In that case I would create a file in the charms root that gets updated by the hooks respectively. So it would just add or subtract values depending on if configuration was changed or if a relation was changed
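(Editorial note: a minimal sketch of the single-master precedence marcoceppi describes. The hook helper and the `server-host` config key are hypothetical, not from any actual zabbix charm; `config-get` and `relation-get` are the standard juju hook tools.)

```shell
#!/bin/sh
# In a *-relation-changed hook: a user-set config value wins over
# relation data; fall back to the relation otherwise.
master_address() {
    configured=$(config-get server-host)   # hypothetical config key
    if [ -n "$configured" ]; then
        echo "$configured"                 # explicit config takes precedence
    else
        relation-get private-address       # fall back to relation data
    fi
}
```

The hook body would then write `$(master_address)` into the agent's config file.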
<paraglade> ok thanks.  I think the configuration over relation data will actually work well with this charm(s).
<marcoceppi> paraglade: As long as you clearly document that in the README (and unless someone else corrects me) that should be sufficient
<paraglade> marcoceppi: if you don't mind I think I have another question that maybe you can give some wisdom on.  I was thinking when a *-relation-changed or *-relation-joined event fired I would just call the config-changed hook in those event hook scripts.  The idea is that I would only need to create the config implementation in one script file.  However it seems that I am only able to get the config data, not the relation data,
<paraglade> when I do this.
<paraglade> any suggestions on a
<paraglade> clean way to handle this?
<marcoceppi> That sounds like a great idea! I know a few charms that do this already. The problem with relation data is the environment variables that exist in order for the relation-get to work are only loaded when the relation is actioned upon. So you can't just call relation-get in any other hooks without supplying which _exact_ relation you're referring to. This gets messy when trying to track what relation_id goes where. What I usually recommend is to
<marcoceppi> just persist relation data to files and have other hooks look in those files for their respective values
<marcoceppi> Here's a small example:
<marcoceppi> https://gist.github.com/marcoceppi/5349842
<marcoceppi> In this case you can just replace update_configuration with executing hooks/config-changed
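(Editorial note: the gist aside, the persist-to-file pattern marcoceppi recommends looks roughly like this. The file name and the `host` relation key are made up for illustration; `CHARM_DIR` is set by juju in the hook environment.)

```shell
#!/bin/sh
: "${CHARM_DIR:=/tmp/charm}"   # set by juju at hook time; default for testing

# db-relation-changed: persist what the remote side sent so that
# non-relation hooks can read it later.
save_relation_data() {
    mkdir -p "$CHARM_DIR"
    relation-get host > "$CHARM_DIR/.db-host"
}

# config-changed (or any other hook): read the persisted value;
# empty if the relation hasn't fired yet.
saved_db_host() {
    cat "$CHARM_DIR/.db-host" 2>/dev/null || true
}
```

This avoids calling `relation-get` outside a relation context entirely.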
<AskUbuntu> Does my charm need both a relation-changed and relation-joined hook? | http://askubuntu.com/q/279706
<sarnold> marcoceppi: these are cool snippets, I'd hate to lose them.. are they documented anywhere else? or implemented anywhere? :)
<marcoceppi> sarnold: I was actually self documenting paraglade's question on to Ask Ubuntu to help keep them alive
<marcoceppi> Hopefully help fill up Google's juju/charming questions with better information
<sarnold> marcoceppi: oh! excellent :)
#juju 2013-04-10
<stub> Can anyone help with an lxc environment issue? I can bootstrap fine, but 'juju ssh 0' is prompting for a password. I have no problems with openstack.
<m_3> stub: try `juju ssh posgresql/0` instead... machine-id's a little wonky with the local provider
<m_3> ok, maybe with a 't' in there :)
<m_3> just `ssh ubuntu@10.0.3.xxx` should work ok too... just not using the machine-id
<stub> m_3: So I can't use the unit name, because deploy is leaving the unit stuck in a 'pending' state. juju status isn't giving me an ip address either -         public-address: null
<stub> So my guess would be the lxc container's network is borked, except that commands like 'juju status' and 'juju debug-log' are working...
<stub> hmm.... debug-log showing me nasty encoding failures, and 2.7 python paths. Which leads me to an open bug about lxc using python3 and exploding because juju is python2 and has PYTHONPATH set.
<stub> So I think I'm on openstack until that is resolved ;)
<m_3> stub: yeah, iirc raring lxc is still unhappy with juju
<stub> So lemme just set up a precise instance using lxc.... and.... umm....
<m_3> think the python2/3 issue isn't there in other series (just was working with precise local provider this morning)
<stub> Does lxc inside lxc actually work, or does the universe implode?
<m_3> yikes... haven't tried that one... I do lxc in kvm
<m_3> actually have a precise ec2 instance spun up where I was working with precise lxc just now
<stub> You people with your fast ping times...
<m_3> in a novotel atm :)
<m_3> stub: please lemme know if lxc is inceptable
<m_3> that's probably a big ole rabbit hole though
<stub> Yeah, I don't think I can justify wasting time on it ;)
<m_3> ack
<stub> m_3: Do you recall if the PostgreSQL charm always required the database name to be set from the client end? http://jujucharms.com/charms/oneiric/oops-tools/hooks/db-relation-changed seems to expect it to be set by the server
<m_3> stub: yes, iirc, the sequence is that the postgresql service creates a db w/ creds during relation-joined
<m_3> then relation-sets that for the other side to use
<m_3> it creates the db name based on the remote service name
<m_3> that way multiple units of the same service get the same db
<stub> m_3: So that would mean there is a bug in http://jujucharms.com/charms/precise/postgresql/hooks/hooks.py around line 1760?
<m_3> lemme look
<stub> oic.
<stub> What is the syntax to set the database name from the command line then? juju set ... ?
<m_3> yes
<m_3> looking to see if there's a config param for that
<m_3> there doesn't look to be one
<m_3> and there shouldn't
<m_3> whoops, I thought that was working on there
<m_3> the sequence with a client is:
<stub> So my understanding: there is a bug in the postgresql charm. At the moment it aborts if the database name isn't set on the relation. Instead, it is supposed to create a database based on the remote service name and inform the client.
<m_3> juju deploy postgresql && juju deploy wordpress && juju add-relation wordpress postgresql
<m_3> yes, that sounds right
<m_3> so wordpress calls db-relation-joined which should be a no-op
<m_3> postgresql calls db-relation-joined which creates database named "wordpress" and makes up user creds
<m_3> does `relation-set` with all that info
<m_3> wordpress calls db-relation-changed
<m_3> if postgresql has sent the info, it uses it... otherwise, exits cleanly and waits to be called again
<m_3> postgresql calls db-relation-changed which should be a no-ip
<m_3> s/no-ip/no-op/ (that's my dns provider so's in muscle mem)
<m_3> dang, I didn't realize that's been broken
<stub> I'm writing my first client charm (so I can write tests) and was wondering why things were odd.
<m_3> the postgresql's db-relation-joined hook needs to not _create_ a new db if it's already been created... it just sends the auth info back down the pipe
<m_3> stub: that way multiple client units get the same database within the postgresql service
<stub> right. Do you want me to fix the charm or do you want to repair it?
<m_3> so if you want multiple clients talking to _different_ dbs... then the clients should have different service names
<m_3> juju deploy wordpress blog1 && juju deploy wordpress blog2
<m_3> then when those're related, we'd have two different dbs in pg
<m_3> stub: It'd probably be better for you to do it
<m_3> but I'm happy to if you'd rather
<stub> I'll give it a go. I need it working so my charm client works ;)
<m_3> thanks... wow, sorry didn't realize in review that that process was borked
<stub> m_3: I suspect there is a charm somewhere relying on this behavior - specifying the database name to use
<m_3> oh?  :(
<m_3> we'll have to have a look at pgbouncer and friends too
<m_3> hmmm.... wonder if it's worth trying to support either behavior
<stub> I think we can have both behaviors, although we may end up with a spurious database if postgresql-charm creates a db then client-charm sets the database name
<m_3> if we moved pg's creation logic into relation-changed
<m_3> there's a bit of a problem there... ambiguity on what it means when the client is "silent"
<m_3> does it mean the that client hasn't finished coming up yet?
<m_3> or that it wants you to pick the db name for it?
<stub> http://jujucharms.com/charms/precise/oops-tools/hooks/db-relation-joined
<m_3> crap
<stub> Cause there will be several different services needing to share the database.
<m_3> right... that's still totally cool
<m_3> based on service_name
<m_3> new services get new dbs
<stub> Different services with different names needing to share the database (because they need to share data)
<m_3> oh oh, gotcha
<stub> It is either that, or we require messing around with pgbouncer proxies and aliases and that seems more painful.
<m_3> lemme look at oops... maybe we just need to support both behaviours
<stub> So I'll see if I can support both uses.
<m_3> the kicker is that we'll probably need to make mysql do the same thing
<m_3> mysql does the 'db decides'
<stub> Nah, mysql is for toy services ;)
<m_3> haha
<m_3> well I'm also wondering if we've copied that paradigm for mongodb, cassandra, etc
<m_3> important to have them consistent
<m_3> but supporting both behaviors seems worth trying
<stub> I don't know if mongodb or cassandra have a similar way of partitioning data like PostgreSQL and its separate databases.
<m_3> see if you can figure out how to differentiate "defaults" -vs- "relation-guard"
<m_3> yeah, don't know if they create default keyspaces or leave that up to the client
<m_3> relation-guard is what I call the "is the other side up yet?  go on... otherwise exit 0"
<m_3> remember changed keeps getting called as long as either side keeps doing relation-set
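The "relation-guard" idiom m_3 names can be sketched as plain shell; the relation-get call is stubbed out as a function argument here, so everything below is illustrative rather than an actual charm hook:

```shell
# Hypothetical relation-guard: exit 0 cleanly until the other side has
# published the setting we need; juju re-fires -changed on each relation-set,
# so the hook gets another chance later.
relation_guard() {
    value="$1"   # in a real hook: value=$(relation-get database)
    if [ -z "$value" ]; then
        echo "waiting"   # other side not ready; a real hook would `exit 0` here
        return 0
    fi
    echo "proceed"
}
relation_guard ""        # → waiting
relation_guard "mydb"    # → proceed
```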
<stub> Right. So pg's relation-changed and relation-joined are the same. So in this hook, if the database name isn't provided we generate it. If the database doesn't exist, we create it.
<stub> And extra points for cleanup - if we generated a database that is not used by any active relations, nuke it.
<stub> But a spurious empty database does no harm really.
<m_3> we probably need to remove the "joined"
<m_3> if it's reading anything from the other side, then that's really best done in changed exclusively
<m_3> joined really should never try to read (it's only fired once)
<m_3> either making it a no-op in code (exit 0) or just removing the hook (link) is equiv
<stub> It needs a -joined hook to generate the default database in case the remote end doesn't want to specify it.
<stub> So -joined generates a default database and -changed allows the client to override it.
<m_3> oh, ack
<m_3> yeah, that's probably workable... check the same "database" relation var
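The naming rule stub and m_3 settle on (a generated default in -joined, client override in -changed via the same "database" relation var) can be sketched as a small shell helper; the function name and fallback scheme are assumptions, not the postgresql charm's actual code:

```shell
# Hypothetical helper: prefer the database name the client set on the
# relation; otherwise fall back to a default generated from the remote
# service name.
resolve_db_name() {
    client_db="$1"        # in a hook: client_db=$(relation-get database)
    remote_service="$2"   # in a hook: derived from JUJU_REMOTE_UNIT
    if [ -n "$client_db" ]; then
        echo "$client_db"
    else
        echo "$remote_service"
    fi
}
resolve_db_name "" "oops-tools"        # → oops-tools (generated default)
resolve_db_name "shared" "oops-tools"  # → shared (client override)
```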
<m_3> cool... I'll let you proceed
<m_3> thanks man... nice catch
<m_3> stub, m_3: just noting we need test cases for multiple client services _and_ multiple units of the same... using defaults as well as overriding db name
<stub> ack
<stub> This schema_user thing seems bogus too. A magic user is created with superpowers on the database, so every client using the charm gets superpowers on the database they connect to destroying any security you bother to set up with the normal user.
<mgz> jamespage: I want review on mine and Mark's packaging updates to juju, should I just propose my branch for merging into raring? the catch is the history was kinda screwy so the mp diff will... not be useful
<jamespage> mgz, yeah - thats OK
<jamespage> or just pass me the branch
<mgz> jamespage: have done everything and proposed, see lp:~gz/ubuntu/raring/juju/0.7 and mp
<m_3> ugh... /me hates userpas
<mgz> m_3: ^you probably also want to have a look at that branch, to see if I brought your changes in correctly
<jamespage> mgz, m_3: some comments/fixes required - I'll doc them in the MP
<mgz> jamespage: thanks!
<m_3> jamespage: mgz: cool... hoping there's a better way than sed to get the version from rules into the postinst/prerm files
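The sed-from-rules approach being discussed boils down to rendering a `.in` template at build time; a hedged, self-contained sketch (the file names and `@VERSION@` placeholder are illustrative, not the actual packaging):

```shell
# Render postinst from postinst.in by substituting a version placeholder,
# roughly what debian/rules would do during the build.
tmp=$(mktemp -d)
VER=0.7
printf 'update-alternatives --install /usr/bin/juju juju /usr/lib/juju-@VERSION@/bin/juju 10\n' \
    > "$tmp/postinst.in"
sed "s/@VERSION@/$VER/g" "$tmp/postinst.in" > "$tmp/postinst"
cat "$tmp/postinst"
```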
<mgz> gah, and these test failures are new to me...
<jamespage> m_3, ah - right
<jamespage> that answers one of my questions
<jamespage> note that is not a great way to do it
<m_3> mgz: yeah, I guess my use of 'DEB_BUILD_OPTIONS=nocheck' is a really bad habit
<mgz> this is just something in my env, these ones were passing earlier...
<mgz> not the most robust suite in the world
<m_3> ack
<mgz> okay, the branch is slightly borked...
<jamespage> mgz, I had trouble getting the orig.tar.gz as well
<mgz> gah, quilt, I hate you
<jamespage> comments in the MP
<jamespage> https://code.launchpad.net/~gz/ubuntu/raring/juju/0.7/+merge/158088
<mgz> yeah, the rule for getting the tarball is borked... but it has been forever
<mgz> and I didn't want to fix
<mgz> thanks, those comments are useful
<m_3> jamespage: yup, thanks
<jamespage> m_3, mgz: if you do want to name the package juju-0.7 then we need to deal with transitions from juju -> juju-0.7
<m_3> the prerm 'rm' might not be necessary with '--compile' in the rules... purge wasn't cleaning out that directory for some reason
<m_3> jamespage: yeah, I guess I don't understand exactly how an upgrade would happen then... I thought it would remove old and install new, which should be fine with /usr/lib/juju-0.6 and /usr/lib/juju-0.7
<mgz> jamespage: the plan is to add a meta-package I think
<jamespage> m_3, for the files yes they would get automatically upgraded; but the alternatives will break
<mgz> pushing a fix for the issue I noticed
<jamespage> actually forget that - you use a prerm to remove the older version alternatives
<jamespage> so its OK I think
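The alternatives handling jamespage describes (each versioned package installs its own alternative; its prerm removes it on upgrade/removal) can be exercised without root by pointing `--altdir`/`--admindir` at a scratch area; all paths here are illustrative:

```shell
# Sketch of the versioned-alternatives dance: install 0.6 and 0.7 as
# alternatives for "juju", then remove 0.7 (what its prerm would do)
# and watch the link fall back to 0.6.
tmp=$(mktemp -d)
mkdir -p "$tmp/alt" "$tmp/admin" "$tmp/bin"
touch "$tmp/juju-0.6" "$tmp/juju-0.7"
ua() { update-alternatives --altdir "$tmp/alt" --admindir "$tmp/admin" "$@"; }
ua --install "$tmp/bin/juju" juju "$tmp/juju-0.6" 6
ua --install "$tmp/bin/juju" juju "$tmp/juju-0.7" 7   # higher priority wins in auto mode
readlink -f "$tmp/bin/juju"
ua --remove juju "$tmp/juju-0.7"                      # prerm of the 0.7 package
readlink -f "$tmp/bin/juju"
```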
<jamespage> m_3: sorry - being dumb
<m_3> jamespage: dude, I'm just barely caught up on this one... running through upgrade scenarios
<jamespage> mgz, m_3: did I say to merge the 0.7 changelog entries?
<mgz> jamespage: nope. I can do that
<mgz> was a little confused as to the right thing, with ppas in the mix as well.
<jamespage> mgz, ignore PPA changelog entries - only distro ones are relevant
<m_3> jamespage: I assume '$< >' was an accidental autocomplete and not something strange required by dh?
<jamespage> m_3: thats makefile magic
<m_3> oh, nm... it's rhs and then redirect to file
<m_3> doh
<jamespage> $< is read-in dependency (in this case XX.in) and write out >
<m_3> right
<m_3> wasn't expecting the write out... my bad
<m_3> what a difference an '-i' makes :)
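jamespage's `$<` explanation in a runnable nutshell: `$<` is the first prerequisite (the `.in` file), `$@` is the target, and the `>` redirect writes the rendered file. The file names are hypothetical; note the recipe line must start with a tab:

```shell
# Write a tiny Makefile whose rule mirrors the packaging idiom: render
# postinst from postinst.in via sed, reading $< and redirecting to $@.
tmp=$(mktemp -d)
printf 'version @VERSION@\n' > "$tmp/postinst.in"
cat > "$tmp/Makefile" <<'EOF'
VER = 0.7
postinst: postinst.in
	sed "s/@VERSION@/$(VER)/g" $< > $@
EOF
make -s -C "$tmp" postinst
cat "$tmp/postinst"   # → version 0.7
```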
<mgz> have pushed the changelog fix
<mgz> m_3: are you working on the alternatives changes? you can either base off my branch, or I can pull in your parts when you're ready
<m_3> mgz: I based off of your branch... pushing them one at a time to ~mark-mims/ubuntu/raring/juju/0.7
<m_3> mgz: figure you can merge them and resubmit your branch
<m_3> jamespage: mgz: any ideas why this won't purge cleanly unless the 'rm -Rf /usr/lib/python2.7/dist-packages/juju' is there?
<jamespage> m_3 what error do you get?
<m_3> take a bit to reproduce, but essentially it was a handful of "won't remove directory because it's not empty"
<m_3> I sort of assumed it was .pyc files at the time, but that's not nec true
<m_3> I'm sure it's cause I'm overriding dh_auto_install with
<m_3> 'set -e && python setup.py install --root=debian/juju --install-layout=deb --install-scripts=/usr/lib/juju-$(VER)/bin --compile'
<m_3> distutils magic that I'm missing by calling it manually
<m_3> mgz: kicking off pbuilder to test the postinst/prerm changes
<m_3> mgz: quilt's failing on that zookeeper patch
<m_3> 0.6 still debuild's fine though
<jamespage> mgz, m_3: you guys know about bzr merge-upstream right?
<m_3> jamespage: um.... no
<m_3> googling now
<m_3> :)
<m_3> I was tarring
<mgz> jamespage: when the branch isn't horribly broken, yes
<jamespage> mgz, is the last 0.6 update in the branch good?
<mgz> it should work for any future upstream merges... but I needed to do some hacks
<mgz> jamespage: nope. I could have based on the new minor 0.6.1 that I also fixed up, but really that's diverged
<m_3> mgz: so all changes _other_ than manpages are in... just having problems getting past the borking patch atm
<m_3> so they're untested
<mgz> m_3: what is breaking for you exactly atm? we should possibly just add some more test skips for now.
<m_3> mgz: http://paste.ubuntu.com/5695537/
<m_3> from `debuild -us -uc -S`
<mgz> ...internal error? let me try that locally. perhaps just refreshing the patch will help.
<m_3> I'll clear and re-pull
<m_3> damn that repo's big
<mgz> I've been using `bzr builddeb -- -nc -us -uc` which works +- failing tests
<m_3> crap... same thing with just your branch
<m_3> juju-0.6 does fine though
<m_3> ok, well lemme try with bzr bd
<m_3> no love
<m_3> got a call.. biab
<mgz> m_3: I get a failure on your branch, but not mine, will see if I can fix
<m_3> mgz: sorry, I don't really know what's going on with .pc
<m_3> I assume it's where it's applying the patches or something
<mgz> yeah, that's all quilt related
<mgz> m_3: what I get <http://paste.ubuntu.com/5695604/>
<m_3> mgz: yikes
<mgz> m_3: try `quilt pop -a` then `quilt push -a`
<m_3> mgz: ok, so I've gotta get some other bits into the rules
<m_3> dang... did the pop and push and it's still failing
<mgz> m_3: so, with your commit of the .in files and one more fix, I get a build package: lp:~gz/ubuntu/raring/juju/mims
<m_3> mgz: oh, whew... that error was when the .in files were missing
<m_3> mgz: good, was scared by that error otherwise
<mgz> I'll fixup that and put it in my branch. what have we got left, the man pages?
<m_3> mgz: yeah, I wanted to test the .in changes
<m_3> mgz: and then figure out how to do the manpages
<m_3> no clue how to version them for alternatives though
 * m_3 can look at java or something
<m_3> mgz: I was wondering what vim's problem was with the syntax on rules... good catch
<mgz> m_3: those fixes are now brought across to lp:~gz/ubuntu/raring/juju/0.7
<m_3> mgz: oh, great... I'll rebase
<m_3> thanks!
<mgz> (cherrypicked rather than merged so the fileids stayed the same)
<m_3> ack
<m_3> mgz: argh... no love... the new branch is still doing the same thing for me
<m_3> I give up trying to build it
<m_3> lemme quit screwing with that and see if I can get manpages versioned
<mgz> you can just `rm -rf .pc` if that helps
<mgz> easy enough to get back the state with `bzr revert .pc` after if you're not actually touching the patches
<m_3> still same thing w/o the .pc file /me doesn't know wtf
<mgz> hm, I wonder if it's actually the dsc that's borked. try removing that and the -nc flag if you're still using it?
<m_3> yeah, I just cleaned everything up
<m_3> noted that I had some 0.6 stuff in the higher directory
<m_3> mgz: hey, should I cut the .pc stuff out of the upstream tarball?
<m_3> I'm just creating it from the repo directly w/ no changes
<mgz> it certainly shouldn't be in the upstream tarball
<m_3> ah... ok
<mgz> the upstream tarball is what you get from lp:juju/0.7 and bzr export
 * m_3 facepalm... sorry man
<mgz> it's okay, the tooling for this is slightly borked in the branch as jamespage mentioned earlier...
<m_3> now that makes sense as to why quilt wasn't happy
<m_3> it puts patches on _top_ of the upstream tarball
<mgz> I'll have another quick look at resolving that
<m_3> mgz: whoohooo!
<m_3> that worked
<m_3> ok, wife's waiting on me for dinner... I'll test out alternatives changes, figure out how to get manpages in
<m_3> have a branch for you in the morning?
<imbrandon_> robbiew: arnt you in Austin? I hear its the next city to get Google Fiber ;)
<robbiew> yeeeeeep!
<mgz> m_3: sounds great
<m_3> mgz: might be able to get it sooner... lemme see how long dinner goes
<m_3> whew
<m_3> now I can build again :)
<m_3> ok, I'm out for abit
<imbrandon_> robbiew: I'm telling ya now, go to the http://google.com/fiber page and put your address in for pre-registration, you'll be glad you did … it filled here FAST
<imbrandon_> heh
<imbrandon_> robbiew: yup, here is the exact link, and Austin is now listed … https://fiber.google.com/cities/#header=check
<robbiew> imbrandon_: already did
<imbrandon_> :)
<mgz> m_3: fixed the get-orig-source rule
<mattgriffin> jcastro, arosales: i've got some questions about the Charm Store but need to get off of the hangout and run to lunch. i'll ping you later today. thanks!
<jcastro> ok
<arosales> mattgriffin: ok, thanks for joining us.
<niemeyer> % juju publish cs:~niemeyer/precise/build-mongo
<niemeyer> cs:~niemeyer/precise/build-mongo-0
<niemeyer> !
<arosales> jcastro: thanks for sending out the charm meeting minutes, nice summary
<jcastro> yeah it turned out not as crappy as I thought
<jcastro> I'll make them more interesting once I get used to it
<jadams> anyone here played with the openstack packages?  I'm a bit confused on the 'quantum' networking stuff
<mattgriffin> jcastro, arosales: ping … got some questions about the Charm Store. have a sec?
<arosales> mattgriffin: hello, sure
<arosales> mattgriffin: sorry for the delayed response. I was grabbing some lunch.
<mattgriffin> arosales: np. are there any plans to support some kind of a feedback loop for Charms usage - either quantitative or qualitative - back to the author?
<arosales> mattgriffin: besides a bug report, a method for continuous feedback
<arosales> mattgriffin: there is https://juju.ubuntu.com/docs/charm-quality.html that we are working on "rating" for each charm
<arosales> but that isn't really continuous feedback, and mostly from the LP charmers group and not from users.
<arosales> mattgriffin: aside from bugs were you thinking of comments on the charm page, or a forum, or something else?
<arosales> mattgriffin: also the metadata.yaml does have the authors contact info, at least for charms that have maintainers . . .
<arosales> mattgriffin: in terms of quantitative feedback we are also working on displaying the charm "usage" or download statistics from the charm store.
<mattgriffin> arosales: yeah… feedback could be interest measured by downloads (not really possible), ratings from Charm Store managers (your link), +1's/likes by users, qualitative feedback/praise/reviews by users, and an easy way to submit bugs
<mattgriffin> arosales: so the purpose would be to help Charm browsers make better decisions and also help Charm authors with feedback on interest in their Charm and ideas to make it better
<mattgriffin> arosales: ah… download stats … very interesting
<mattgriffin> arosales: when might the quantitative feedback measures be available?
<mattgriffin> arosales: stats i mean
<arosales> mattgriffin: hopefully by 13.04, so soon
<arosales> mattgriffin: I like your ideas on feedback
<arosales> shows in the charm page
<arosales> what ways did you envision, "feedback/praise/reviews by users"
<arosales> comments? forums?
<arosales> s/shows/shown/
<mattgriffin> arosales: yeah. something similar to consumer app store feedback - short and sweet … and maybe easily tweet.
<arosales> mattgriffin: ya so we need some more social networking integration into the charm page.
<mattgriffin> arosales: perhaps be able to leverage some of the reviews infrastructure (or learnings) from USC
 * arosales makes note for a blueprint for "adding social networking to Charm Store charm pages."
<arosales> mattgriffin: good point on USC
<arosales> mattgriffin: aside from download stats any other quantitative information you would be interested in?
 * mattgriffin thinks
<mattgriffin> arosales: some indication of the age or update frequency might be helpful in selecting a charm… i don't know … might be reaching on that one
<arosales> mattgriffin: perhaps recent commits, and age in the charm store
<mattgriffin> arosales: :) cool
<arosales> mattgriffin: just some ideas on information that is there, but may be interesting to users . .
<arosales> mattgriffin: good stuff, I think there are some good ideas there for a blueprint.
<mattgriffin> :)
<arosales> mattgriffin: please add any ideas that come up into  https://blueprints.launchpad.net/ubuntu/+spec/servercloud-1305-juju-charmstore-feedback-loops
<arosales> mattgriffin: be good to virtually see you at http://summit.ubuntu.com/uds-1305/ too
<mattgriffin> arosales: cool. will do
#juju 2013-04-11
<imbrandon_> you could probably draw some pretty successful conclusions on what info would be useful by mimicking the info found in say the Drupal plugin browser or the Wordpress Plugins Codex or the NPM Web registry for NodeJS modules , they have all successfully solved it and i'm sure what we should be adding is very similar I'm guessing. Just my 0.2c
<m_3> jamespage: when you get a chance, can you see if http://paste.ubuntu.com/5697795/ has any hope of working wrt manpages?
<m_3> followed up by the corresponding update-alternatives slaves in postinst
<m_3> just don't know the best way to get the manpages built into /usr/lib/juju-xxx/man during install
 * m_3 pbuilding it to test now
<jamespage> m_3: that looks like what openjdk does
<jamespage> so its prob OK!
<m_3> ack... thanks!
<m_3> mgz: got a sec?
<m_3> nm... I see you're involved in other stuff
<mgz> m_3: I do
<m_3> just having a hell of a time moving forward with the alternatives for the manpages and wanted a sanity check
<m_3> every time I go to test, the _versioned_ manpages were never installed
<m_3> mgz: I'm trying for something like http://paste.ubuntu.com/5698017/ (also updated in my lp:~mark-mims/ubuntu/raring/juju/0.7 branch
<m_3> I want them installed as _files_ and then I'll update-alternatives to install the actual man pages in postinst
<mgz> hm...
<m_3> but /usr/lib/juju-0.7/man is never created
 * m_3 being thick
<m_3> mgz: oh, wait... it just worked
<mgz> woho :)
<m_3> yeah, nice /usr/lib/juju-0.7/man/man1/xxx.1
<m_3> now I'll just add more slaves
<m_3> mgz: ok, thanks... mp shortly
<m_3> I guess it's only appropriate that the packaging for "juju" be black magic
<m_3> mgz: ok, that latest branch seems to work with manpages... I just noticed we should do alternatives for bash completion too... but shhhhhhh
<m_3> I'll get on that one after I update my juju-2.0 packaging branch to reflect the changes we just made here
<m_3> I think we've covered all of james' comments
 * m_3 biab... more coffee
<m_3> awesome, the purge is even clean now... no cruft left in /usr/lib/python2.7/dist-packages/juju
<m_3> mgz: hey, ok... so bash_completion is versioned in my latest branch
<m_3> testing upgrades from 0.6 -> 0.7 and then also installing 2.0
<m_3> mgz: also updated lp:~mark-mims/juju-core/packaging-with-alternatives/ to use juju-core.{postinst,prerm}.in
<mgz> excellent, I'm just doing ffe bugs now...
<m_3> dang... I wish we could take this opportunity to change the control file
<m_3> change some recommends stuff to suggests
 * m_3 really hates accidentally installing mesa and shit in test containers
<jcastro> hey guys
<jcastro> so thumper sent me a mail last night
<jcastro> and says he can implement one of my ideas
<jcastro> and I told him to bring it up to the list
<jcastro> but basically
<jcastro> I wanted a `juju switch $environment` command
<jcastro> so like
<jcastro> juju switch production
<jcastro> juju switch local-dev-laptop
<jcastro> juju switch testing
<jcastro> and so on
<jcastro> and juju status would spit back an environment name too
<jcastro> so you could do stuff and switch environments without dealing with environment variables or juan's epic dotfile collection
<m_3> mgz: ok, so tips of lp:~mark-mims/juju-core/packaging-with-alternatives/ and lp:~mark-mims/ubuntu/raring/juju/0.7/ play really nicely together...
<m_3> mgz: I tested upgrading 0.6 -> 0.7 and then also installing 2.0.... that worked beautifully
<m_3> not sure what'll happen with going straight from 0.6 -> 2.0.... probably breakage
 * m_3 trying that now
<m_3> jcastro: sure, sounds good
 * m_3 fantasizes about `jujush [env] > `
<m_3> maybe just `jush[env]>`
<m_3> shorter
<m_3> mgz: can we change the control files so that juju-core-1.9+ can live with juju-0.7+, but _conflicts_ with juju-0.6 and lower?
<mgz> hm...
<m_3> I'd like to force an upgrade to 0.7 before folks try to co-install juju and juju-core
<marcoceppi> jcastro: I like the idea
<marcoceppi> I feel like it could be dangerous with juju destroy-environment (I know it warns, but still)
<m_3> mgz: that kinda makes sense
<mgz> yeah, we can do this
<mgz> can do a conflicts on juju < 0.7... have the new juju0.7 package, and a juju metapackage with a higher version...
<mgz> or something along those lines, bugging jamespage now
<m_3> I think if you add juju-core to juju-0.6 and lower, then goju will just have its way with it... testing this now
<mgz> apparently we want breaks and replaces instead
<m_3> mgz: yeah, adding 2.0 on top of 0.6 is not good... ends with a mix of the two
<m_3> mgz: anyways... both of those branches are ready for another round of review
 * m_3 back to scale testing
<mgz> ace, will bring in, and push up
<m_3> jcastro: btw... that's a perfect example (just like `juju publish`) for a command line plugin
<m_3> i.e., stick a 'juju-switch' somewhere in your path (like $HOME/bin) and juju will catch it as a subcommand `juju switch`
 * m_3 still wants that... bad
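The plugin mechanism m_3 describes (pyjuju falling back to any `juju-<name>` executable on `$PATH`) means `juju switch` could be a tiny script. This sketch just rewrites the `default:` key in an environments.yaml; the file layout is assumed and a scratch copy is used instead of the real `~/.juju`:

```shell
# Hypothetical body of a juju-switch plugin: flip the default environment
# in a (scratch) environments.yaml. A real plugin would take "$1" and
# edit ~/.juju/environments.yaml instead.
tmp=$(mktemp -d)
cat > "$tmp/environments.yaml" <<'EOF'
default: testing
environments:
  testing: {}
  production: {}
EOF
env_name=production   # would come from "$1"
sed -i "s/^default: .*/default: $env_name/" "$tmp/environments.yaml"
head -n1 "$tmp/environments.yaml"   # → default: production
```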
<jcastro> m_3: yeah
<jcastro> hey
<jcastro> epic topic?
<m_3> yup
<jcastro> add it!
<m_3> ack
<m_3> there're _sooo_ many things we use that are natural plugins
<jcastro> mgz: heya, http://www.rackspace.com/blog/ramping-up-our-openstack-investment-involvement/
<m_3> mgz: btw, '--force' can safely be removed from 1.9.13's postinst, but has to be there in 0.7 to get a clean upgrade from 0.6->0.7
<jcastro> "While  we believe some variation in implementations will be inevitable, we do want to eliminate as many of these as possible to provide as much of a common OpenStack experience as we can."
<jcastro> this should make supporting rackspace easier I hope?
<mgz> jcastro: that sounds like a big improvement
<mgz> ...horrible debian packaging hack I'm not committing... http://paste.ubuntu.com/5698591/
<mgz> jcastro: I don't see a mention of security groups, but for go juju at least there's a... sort of workaround (just don't use them)
<mgz> console log and keypairs I'll personally welcome though, the undebuggability was what made me give up when I tried hacking something in
<jcastro> mgz: I'm more inferring the desire to not be different
<jcastro> like as an overall goal
<mgz> yeah, it's a good thing
<m_3> mgz: nice... I've been hoping someone would pipe up with standards
<m_3> just don't know them
<SpamapS> mgz: what has been the challenge with supporting rackspace btw?
<SpamapS> IIRC, they listed multiple endpoints for things in the catalog that confused our client libs?
<SpamapS> m_3: there are standards.. but there are also LOTS of extensions (fully compliant w/ said standards) ;)
<mgz> SpamapS: what I wrote down from when I poked it: <http://paste.ubuntu.com/5698755/>
<SpamapS> mgz: for the security groups, seems like you can just bypass that in juju (as in, if exists(securitygroupsextension)...)
<SpamapS> mgz: the other bits... WTF
<mgz> right, I was prepared to hack around security groups somehow, but... got stuck on everything else when none of my normal debug tooling was working :)
<m_3> sonne: yeah, no joke... that's been the hardest part... soooo many ways to do it all
<m_3> sonne: sorry... wrong completion
<m_3> SpamapS: ^
<m_3> mgz: you need anything updated for the morning?  I'm gonna turn into a pumpkin in a sec
<mgz> m_3: I think we're good, but if you could follow up on anything tomorrow while I'm travelling that would be great
<mgz> I'll send an email with the current state of things tonight so everyone knows where we're at
<m_3> mgz: ack...  I'm flying tomorrow afternoon UTC too
<m_3> mgz: thanks!
<marcoceppi> jcastro: is that juju_icons_source.svg anywhere?
<jcastro> that is an excellent question?
<jcastro> can you reply on list? I have jovan following along
<marcoceppi> It would be great once this becomes policy to have the template dropped in the charm root during charm create
<jcastro> indeed
<jcastro> that is quite clever
<jcastro> <-- lunch
 * koolhead17 looking for other Juju guy from Portland other than adam_g 
<mgz> koolhead17: you will not be disappointed
<mgz> at least next week only :)
<koolhead17> mgz, hehe
<koolhead17> adam_g, hola there. looking forward to see you sir :)
<ahasenack> hi guys, I'm wondering if there is a way to inject a proxy configuration into juju-deployed units, even for the bits that juju itself needs to install before handing over to my install hook
<SpamapS> ahasenack: https://bugs.launchpad.net/juju/+bug/897645
<_mup_> Bug #897645: juju should support an apt proxy or alternate mirror for private clouds <cloud-init:Fix Released> <The Eilt project:New> <juju:Confirmed> <juju-core:Confirmed> < https://launchpad.net/bugs/897645 >
<ahasenack> SpamapS: yeah, just found it
<ahasenack> SpamapS: so no dice?
<ahasenack> hazmat: around? Is there no way to inject a proxy configuration into the juju environment with pyjuju? Something that juju would use even before the install hook is called?
<SpamapS> ahasenack: custom image.. :)
<ahasenack> heh
#juju 2013-04-12
<imbrandon_> or hack juju's cloud-init injection to add your stuff as well
<imbrandon_> but its hard coded in juju so it would be a hack ( i did it on hp cloud back when i setup the first set of archive mirrors over there )
<imbrandon_> ahasenack: ^^
<imbrandon_> iirc it was not that hard nor that big of a change, just the fact of doing it tho felt dirty, heh
<ahasenack> imbrandon_: yep, see it
<imbrandon_> since cloud-init takes base64 encoded filecontents as a valid script i've long thought that a setting in env.y of like "init-extra: somebase64encstringgere" would be killer , but it never made it into the py version before new features were supposedly accepted and the go guys immediately said "no" trying to even mention getting it into go at all soooooo …
<imbrandon_> ahasenack: that was months ago tho , if you wanted to prod at them again :)
<ahasenack> imbrandon_: I subscribed to the bug, cloud-init even added an apt_proxy key because of it, all that's missing is juju using it
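For reference, the cloud-init key mentioned here is set in cloud-config like this (a config fragment; the proxy URL is illustrative, and as noted juju would still have to emit it into its user-data):

```yaml
#cloud-config
apt_proxy: http://squid.internal:3128
```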
<imbrandon_> :)
<fcorrea> Hello folks. I'm on raring and it seems something went bad recently. I'm trying to deploy mysql on lxc but it doesn't seem to be working as if juju wasn't doing anything. It sits there and stays forever in a pending state
<fcorrea> any ideas on how to debug?
<fcorrea> juju debug-log shows nothing
<marcoceppi> fcorrea: If I recall correctly, juju local provider is broken on raring
<fcorrea> marcoceppi, oh, bummer
<fcorrea> well, I guess I could create a quantal or precise vm and get things done I guess
 * marcoceppi looks for the bug #
<marcoceppi> You should have gotten a nasty error when trying to bootstrap
<imbrandon_> hazmat / marcoceppi : you happen to know if the current openstack provider works on RAX ? or is it still a no-go
<imbrandon_> pyjuju fwiw
<fcorrea> marcoceppi, didn't get any error at all. It was all fine but nothing really happens after deploying
<marcoceppi> fcorrea: You set a data-dir in your environments file, in there should be a machine-agent.log or something similar, if you could paste that in to a pastebin it'd be helpful to start debugging
<fcorrea> marcoceppi, that's much more helpful: http://paste.ubuntu.com/5701877/
<fcorrea> mmm...I'm wondering if my python got messed up
<marcoceppi> fcorrea: yeah, that's the lxc/raring bug
<fcorrea> marcoceppi, gotcha
<marcoceppi> fcorrea:  https://bugs.launchpad.net/ubuntu/+source/juju/+bug/1130809
<fcorrea> marcoceppi, thanks. subscribed
<brunogirin> Hi all, trying to run juju-core on Raring with EC2: juju status after bootstrap goes into infinite loop, any ideas what I can be doing wrong?
<hazmat> imbrandon, no go
<imbrandon_> okies, ty
<hazmat> imbrandon, shouldn't be hard to amend.. it needs extensions detection in the client, and disabled nova network security-group/firewalls as they don't exist there.
<hazmat> the rax idiosyncratic auth security is implemented
<imbrandon_> cool, yea iirc thats getting flattened out now
<imbrandon_> well not that specificly but where rax != os
<hazmat> cool
<imbrandon_> http://www.rackspace.com/blog/ramping-up-our-openstack-investment-involvement/
<imbrandon_> ^^ if ya hadent seen it
<imbrandon_> some other cool things coming out of RAX fast too, like 5 new complete SDKs that work on pure openstack out of the box and rax secondly
<imbrandon_> including http://php-opencloud.com
<imbrandon_> :)
<imbrandon_> someone lit a fire under their arse i think :)
<imbrandon_> its funny seeing all the activity for the last ~10 days on their github repos , the rush before OpenStack Summit :)
<hazmat> its present for all the ostack contributors
<ahasenack> SpamapS: hi, around?
<ahasenack> SpamapS: wondering how to correctly use jitsu status
<ahasenack> this hangs forever, for example:
<ahasenack> jitsu watch ubuntu/0 --state=started --failfast
<ahasenack> there is no service deployed in this env, just bootstrap
<ahasenack> ok, timeout in front
<marcoceppi> How hard would it be to add some way to track who promulgates what to the store?
#juju 2013-04-13
<AskUbuntu> cloud:nova-compute:apt-get: cgroup-lite Size mismatch | http://askubuntu.com/q/280983
#juju 2013-04-14
<wedgwood> I'm trying out juju-core using my existing openstack config and I'm getting an error on bootstrap
<wedgwood> 'error: secret-key: expected nothing, got "[...]"'
<wedgwood> I'd expect that I'd need that value. Is the openstack provider working yet?
<AskUbuntu> Running Juju on a local server | http://askubuntu.com/q/281530
#juju 2014-04-07
<jose> guys, is it fine if a non-charmer reviews a charm pointing some errors? I'd like to do some review when I have some time
<davecheney> jose: yes, absolutely
<davecheney> many hands make light work
<jose> cool, thanks! :)
<cory_fu> Good morning, all.
<tvansteenburgh> morning cory_fu
<mbruzek> We have a system that juju status tells us that 2 system's agent state is "down"
<mbruzek> Is there a way to restart the juju agents on a system?
<lazyPower> mbruzek: upstart should handle restarting the juju-agent
<lazyPower> I've brought several machines up and down this weekend that are manually provisioned. There is an upstart task for resuming operation -- i dont know if they fail out if the bootstrap node is unavailable... i didn't look that closely at it.
<mbruzek> lazyPower, juju status shows 2 agents down, so presumably the upstart is not working.  Can we manually restart them?
<lazyPower> should be able to just call the service back up. let me remote into a juju box and peek at /etc/init
<lazyPower> mbruzek: jujud-machine-1.conf exists
<lazyPower> that should be the agent. so service jujud-machine-# restart should fix it
<tvansteenburgh> lazyPower: ever seen this error:
<tvansteenburgh> ERROR state/api: websocket.Dial wss://10.0.3.1:17070/: x509: certificate signed by unknown authority...
<tvansteenburgh> http://pastebin.ubuntu.com/7216718/
<lazyPower> Yeah, it was bootstrapped by another workstation and your client is bugging out over the ssl signature
<lazyPower> this is why we actively suggest jumpstations for working on remote systems as a team presently
<tvansteenburgh> i don't see how that's possible, this is local
<lazyPower> did you wipe your juju config between bootstrap and now?
<lazyPower> those are the only 2 cases where i've seen that.
<tvansteenburgh> you mean environments.yaml?
<lazyPower> nah, i mean the jenv stuff
<tvansteenburgh> i don't even know where that is
<tvansteenburgh> so i don't think i deleted it
<tvansteenburgh> any way to work around this?
<marcoceppi> tvansteenburgh: it's in ~/.juju/environments
<marcoceppi> tvansteenburgh: you have to tear down and start again
<lazyPower> ^
<tvansteenburgh> if i force destroy the env then rebootstrap, i get this error running juju status
<marcoceppi> tvansteenburgh: that's, interesting
<tvansteenburgh> i seem to do all sorts of interesting things
<marcoceppi> tvansteenburgh: destroy again
<marcoceppi> rm -f ~/.juju/environments/local.jenv
<marcoceppi> try again
<tvansteenburgh> after destroying, there's nothing in ~/.juju/environments
<tvansteenburgh> the full error is:
<tvansteenburgh> ERROR state/api: websocket.Dial wss://10.0.3.1:17070/: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "juju-generated CA for environment \"local\"")
<tvansteenburgh> filed a bug on it this morning but i'm really just looking for a way to get past it now
<marcoceppi> tvansteenburgh: destroy, again, run bootstrap with --debug flag
<marcoceppi> paste output
<tvansteenburgh> marcoceppi: http://pastebin.ubuntu.com/7217166/
<tvansteenburgh> uhhh
<tvansteenburgh> and now juju status works fine
<marcoceppi> you're welcome \o/
<marcoceppi> ;)
<tvansteenburgh> \o/
<lazyPower> bonus
<overm1nd> hi marcoceppi, do you have any plan to update the wordpress charm?
<marcoceppi> overm1nd: I've had plans to update that charm for over a year
<marcoceppi> overm1nd: it's the time I don't have, did you want something done?
<overm1nd> ahaha I can feel the pain
<overm1nd> I was interested in multisite support
<marcoceppi> overm1nd: yeah, I have a plan for that, which would work nicely with juju
<marcoceppi> overm1nd: I may try to attack it this weekend, unless you wanted to give it a go
<overm1nd> I tried but it feels too complex for me
<overm1nd> I also tried to start a new charm but I can't get the source from the charm get command
<overm1nd> I know I don't need it but something is broken in my env, I need to look at it
<overm1nd> to tell the truth I'm also evaluating puppet and ansible
<webbrandon> any of you ever run into these when mining: http://paste.ubuntu.com/7217201/ ?  I just started a private network server and all my miners get that, trying to think of why?
<webbrandon> oops wrong chat
<smoser> hazmat, responding to a friday ping now. you still need anything?
<hazmat> smoser, mr wolf was calling in reinforcements at the time, i think its all clear atm.
<smoser> well, yeah.
<hazmat> smoser, how was the game?
<smoser> game was cold (very strong gusts of wind, ~40 degrees F and ~30F with windchill). and cubs lost. but i was at wrigley field for opening day, so that part was awesome.
<nessita> hello everyone, I'm having an issue with juju deploy that is showing up since I updated juju-core to 1.18. Error is: "only charm store charm references are supported, with cs: schema" when deploying the solr-jetty charm
<nessita> more debug output in https://pastebin.canonical.com/107885/
<nessita> any ideas?
<nessita> perrito666, would you know who can help me with that ^?
<noodles775> mbruzek: Hi, if you've time, can you check this charm-helpers MP: https://code.launchpad.net/~michael.nelson/charm-helpers/fresh-ansible-relations/+merge/214203
<noodles775> mbruzek: bloodearnest has already checked it and is +1 (hrm, bloodearnest, can you add your vote there when you get a chance)
<mbruzek> Yes I will have a look
<noodles775> mbruzek: he's the other ansible-support user besides me :-)
<noodles775> Thanks.
<bloodearnest> noodles775: done
<tclarke> having trouble setting up juju to run disconnected from the internet on openstack. I did a juju sync-tools --all --local-dir=/tmp/juju   on a connected machine, moved the result to a web server accessible by the openstack system
<tclarke> then I created an "images" directory on the web server and grabbed releases/precise/*12.04*amd64*  (the only version I care about for this deployment)
<tclarke> also grabbed cloud-images release/streams/v1/index.json and com.ubuntu.cloud:released:download.json
<jcastro> sinzui, _THANK YOU_ for such finely detailed release notes
<jcastro> that is an epic announcement
<tclarke> removed all the entries which weren't 12.04 amd64
<tclarke> added "tools-metadata-url: http://server/juju/tools" and "image-metadata-url: http://server/juju/images" to my environments.yaml
<tclarke> juju -v bootstrap gives me "ERROR bootstrap failed: cannot start bootstrap instance: invalid URL "http://cloud-images.ubuntu.com/release/streams/v1/index.json" not found"
<tclarke> what am I missing?
<sinzui> tclarke, /release/ is missing a "s", /releases/
<tclarke> thought of that...I have a symlink between "release" and "releases"
<sinzui> tclarke, you may not need to set that since that is the default location to search
<tclarke> so both exist
<tclarke> here's the full directory listing served by the web server: http://www.pastebin.ca/index.php
<tclarke> hang on, wrong link
<tclarke> here, pastebin.ca doesn't want to work for me :)    http://pastebin.com/eVP12L8q
<AskUbuntu> MAAS / nodes have limited internet access after install | http://askubuntu.com/q/444563
<sinzui> tclarke, oh, yes, restricted networks must open access to that machine or mirror the images locally.
<tclarke> sinzui: I realise that...that's what I'm trying to do but juju is ignoring my image mirror config and is still trying cloud-images
<sinzui> tclarke, right, and image-metadata-url is the right option to change :(
<bloodearnest> hmm - is it only the MaaS provider that can use --to lxc:n ?
<tclarke> is there an equiv of "juju sync-tools" for the images?
<sinzui> tclarke, I configure tools-metadata-url most of the time and it is honoured. There is no effort to talk to streams.canonical.com when I change tools-metadata-url
<tclarke> or a similar tool...I suspect I'm either missing something in the image repo or it's not properly set up, but there seems to be no documentation on creating a mirror
<sinzui> tclarke, maybe there is a jenv file present...juju is ignoring the yaml file.
<sinzui> tclarke, you may need to delete $JUJU_HOME/environments/my-env.jenv
<tclarke> I've attempted "rm -rf .juju/environments" a couple times...didn't seem to help
<sinzui> tclarke, juju does not warn you that it found a cached config and is using it
<sinzui> jamespage, do you have any experience with image-metadata-url being ignored with maas configs as tclarke reports?
<tclarke> not maas tho, openstack
<tclarke> progress....I seem to have been missing a level in the directory tree..
<tclarke> when I fix that, I get a different error about an invalid index.json.
<tclarke> my index.json is: http://pastebin.com/vRc6USiS
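For reference, the offline-mirror settings under discussion amount to an environments.yaml stanza along these lines (the environment name and hostnames are illustrative; the two URL values are the ones tclarke quotes above):

```yaml
# Sketch of an environments.yaml entry for an air-gapped OpenStack cloud.
# Both URLs must point at the top of a simplestreams tree whose
# streams/v1/index.json references only the products actually mirrored.
my-openstack:
  type: openstack
  # mirrored output of `juju sync-tools --all --local-dir=...`
  tools-metadata-url: http://server/juju/tools
  # mirrored image metadata (streams/v1/index.json lives under here)
  image-metadata-url: http://server/juju/images
```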
<Fishy_> My MAAS import boot images never finishes.. any idea how I can check the status of it?
<Fishy_> something is broked
<Fishy_> looking at /var/log/maas
<Fishy_> nothing good there
<Fishy_> well ok I get this
<tvansteenburgh> who is the resident mysql charm expert?
<tvansteenburgh> i have a master:slave setup, and the reads are going to master and writes to slave
<Fishy_> http://paste.ubuntu.com/7217896/
<Fishy_> does that error make sense to anyone
<Fishy_> Invalid: The input field 'username' was not expected.
<Fishy_> argh
<lazyPower> tvansteenburgh: that would be marcoceppi
<jcastro> jose, ping
<jose> jcastro: pong
<jcastro> hey if a charm doesn't have a logo
<jcastro> you don't need to make an icon for it
<jcastro> example: pictor
<jose> oh, pictor, dustin made those
<jose> I emailed him and asked and he replied with the logos
<jcastro> oh well you can remove them
<jose> huh?
<jose> why's that?
* lazyPower changed the topic of #juju to: Weekly Reviewer: bcsaller || Welcome to Juju!  || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP
<jcastro> jose, it's better to just show a blank icon than a made up one with no branding
<jose> oh, you meant for the ones that don't have an icon
<jcastro> it's literally an orange box with text, it looks like it was made by a developer
<jose> :P
<jose> I'll make sure to fix that asap
<jose> you know what, I'll do it now
<jose> but pictor should be good to go, I think
<jcastro> yeah, either a totally awesome icon that matches upstream's brand, or a template
<jcastro> marco had some other feedback
<jose> I'll wait for it, then
<mhall119> jcastro: marcoceppi: I have a django charm that uses gunicorn, but I need something to service static files, what's the right path for this?
<mhall119> I should also note that it currently uses multiple gunicorn nodes behind an haproxy
<mhall119> and currently the static files are references with relative URLs like /static/css/site.css
<lazyPower> mhall119: probably co-locate apache2 and gunicorn on the same machine, or nginx
<lazyPower> and use the web daemon as a reverse proxy to the django app, and used for serving the static assets
<lazyPower> i dont have much experience with gunicorn charm however - so this may be a bad suggestion... but thats my knee jerk reaction to the question.
<mhall119> jcastro: can I use swift-dependent things when deploying to my local LXC for testing?
<bloodearnest> mhall119: we have 2 approaches to serving static resources:
<bloodearnest> 1) a custom subordinate charm that builds/tars/deploys the assets on a service's units, then set the service's vhost config (apache in our case) to serve the static assets directly
<bloodearnest> 2) a deploy to a swift bucket and then a proxy through apache to swift
<bloodearnest> 2 is nicer, and less work
<mhall119> bloodearnest: do you have an example config for 2?
<marcoceppi> mhall119: apache2
<bloodearnest> mhall119: I don't, we're just trying that approach now - but noodles775 can probably help
#juju 2014-04-08
<jose> marcoceppi: if you give me a min I'll fix that byobu-classroom thing I messed up
<sarnold> marcoceppi: s/server/service/  on line 51 https://github.com/juju/plugins/blob/master/juju-setall#L51
<marcoceppi> jose: all charms that you did that for need updating, I noticed pictor as well
<marcoceppi> jose: if it didn't originally have a config.yaml there's no need to add one
<jose> yeah, I got that part now
<jose> byobu-classroom has been fixed
<jose> and, updating in what sense?
<marcoceppi> jose: as in
<marcoceppi> if it didn't originally have a config.yaml there's no need to add one
<jose> I didn't do that to all MPs, some of them are only a conversion from text to markdown
<jose> pictor is being fixed now
<jose> and pictor's been fixed
<jose> sorry for being a bit messy with my MPs
<jose> also, the bip MP has been checked by arosales but needs the word of a charmer
<arosales> marcoceppi: the bip one was the odd one where charm proof failed on a 2-character config key
<arosales> jose: hello, btw
<jose> hey, arosales! how's it been?
<arosales> goin well thanks, yourself?
<jose> all good, all good!
<arosales> good to hear, jose: looks like you have been busy with mps :-)
<jose> yeah, did those a couple weeks ago and some of them are a bit messy, but fixes should be easy (in most cases)
<arosales> jose: good to see the contributions
 * arosales grabs some dinner, but will be back on line later tonight
<jose> enjoy!
<jose> for juju devs: thank you for version 1.18, it's simply awesome. I love it!
<jose> thanks, lazyPower :)
<lazyPower> jose: ping
<jose> lazyPower: pong
<lazyPower> jose: the icon for pictor still doesnt meet guidelines. Can you rem it so i can ack this merge?
<jose> will do now
<jose> in the meantime, may I ask why?
<lazyPower> it looks like a developer made it. We're looking for HQ icons that add diversity to the landscape of the charms, and have a tight attention to detail. I mean not to offend - only explain.
<jose> hmm, ok, then
<lazyPower> the white pixel borders on the camera are what called it out for me.
<jose> it's been pushed
<lazyPower> thanks jose
<jose> np
<jose> I'll ask Dustin if he has the sources for the icon and fix it then, no white pixel borders!
<jose> lazyPower: if you're still around, it'd be awesome if you could do a review for https://code.launchpad.net/~jose/charms/precise/owncloud/update-version/+merge/214649
<jose> only updated the version to 6.0.2, just did a successful deploy
<lazyPower> ok. subsequently did you try a merge with the efforts in the ceph charm?
<lazyPower> there's been work on that front and i was going to try to maintain compat with his work
<tvansteenburgh> jose: thanks for the review on the Meteor charm
<jose> lazyPower: is ceph having the same problem as owncloud (not in the repos and needs updating from time to time)?
<jose> tvansteenburgh: no worries, always glad to give a hand where possible :)
<lazyPower> jose: well, its a ceph relationship that was added. I was curious if you attempted to deploy that with the ceph modifications
<lazyPower> they will more than likely collide and require some massaging to work
<jose> hmm, nope, let me do that right now!
<jose> instance is still up :)
<lazyPower> do you have experience in deploying ceph?
<lazyPower> it can be a bit tricky if you dont have spare block devices to work with
<lazyPower> you need to assign a UUID and a block device to get it to deploy
<jose> no experience, but I suppose the readme should be enough?
<jose> huh
 * jose reads
<lazyPower> yeah :)
<lazyPower> give it a read and see how you fare
<lazyPower> it'll be good to get some fresh feedback on it
<jose> lazyPower: ceph's deployed and started, am I supposed to add a relation between owncloud and ceph?
<lazyPower> if you have merged in zchander's changes
<lazyPower> this is where it gets tricky, as its most likely going to collide
 * jose checks
<jose> do you have a link for that branch? I don't seem to find it anywhere and it's not fixed in trunk
<lazyPower> sure 1 sec
<jose> thanks
<lazyPower> jose: https://code.launchpad.net/~xander-h/charms/precise/owncloud/ceph-support
<lazyPower> jose: Pulling your branch now
<jose> cool, thanks
<jose> lazyPower: found a bug on the ceph-support charm: once it's got status `started` port 80 is not open, which makes it unreachable
<jose> not even after it's related with ceph
<lazyPower> jose: that was nacked in the review
<jose> let's manually open it and see what happens
<lazyPower> you should be able to pull the ceph bits from it - but if its too big of a change, it shouldn't be a problem for zchander to backport the changes and operate. I was going off of a "if it takes 15 minutes or less, let's maintain compatibility"
<lazyPower> jose: it also lacks the mysql migrations for the latest edition of owncloud for scale out usage.
<lazyPower> so, its got a fair share of potential issues
<lazyPower> but as POC it shows merit :) re: my text above. and having it integrate with an openstack component ootb would be attractive to some I think.
<jose> hmm, yep
<jose> after the relation and manual opening the port, it *seems* it's running good
<lazyPower> add a mysql relation
 * jose does
<jose> to owncloud or ceph?
<lazyPower> owncloud
<jose> ok
<jose> w0ot, I had never gotten to machine #7 on my environment
<jose> I have now added the mysql relation to the owncloud charm and don't see any notable changes
<jose> seems to run seamlessly
<jose> lazyPower: ^
<lazyPower> bueno
<lazyPower> nice, and the ceph relationship works?
<jose> like, it's there, I don't know if there's anything special that should appear on owncloud?
<lazyPower> it should add a storage location
<lazyPower> add a file, and use the ceph utility to validate it made it over to ceph's storage
<lazyPower> i forget the name of the ceph cli command, may need some google fu
<jose> let me check, haven't used ceph before :)
<lazyPower> jose: eyyy simple updates, i like these reviews
<jose> 'now that U1 no longer exists as a file storage service, deploy your own cloud file storage service with juju and owncloud!'
<lazyPower> jose: jcastro filed that as a task last week :) Glad to know the community is on this as much as we are.
<jose> well, we try :)
<jose> (also, is there a task list?!)
<lazyPower> its internal
<jose> oh, there we go
<jose> lazyPower: when I do `ceph osd -m $addr` I get a bunch of faults
<lazyPower> did you add a block storage device + set a UUID?
<lazyPower> and can you paste the list of faults you get?
<jose> yep, I did
<jose> second
<jose> lazyPower: http://paste.ubuntu.com/7219897/
<lazyPower> is it exposed?
<jose> yes
<jose> (even though port wasn't open, I manually opened it)
<lazyPower> can you humor me and try from the owncloud host using the internal address?
<jose> sure, second
<lazyPower> s/the internal/ceph's internal/
<jose> lazyPower: same exact series of faults
<jose> well, not exactly the same
<lazyPower> jose: ok. Thanks for looking at it jose
<jose> np
<lazyPower> i'm investigating an intermittent install hook error i got from the merge proposal
<lazyPower> works as expected in debug hooks
<jose> is that from which MP?
<lazyPower> the upgrade
<jose> ok
<jose> let me know if there's anything else I can give a hand with
<jose> I'm going to do a quick review of the graphite charm now
<lazyPower> jose: so far so good. it was only once and i think it was due to network latency
<jose> I have an instance with the upgraded charm running atm (ec2-54-208-209-6.compute-1.amazonaws.com if you wanna try, hasn't even been configured)
<lazyPower> just got it deployed. adding storage and scale out options now
<lazyPower> this makes run #4 - so no worries. I think i've got it down
<jose> cool :)
<jcastro> jose, seafile is supposed to be excellent too
<jcastro> wrt u1 replacement
<jcastro> ~negronjl made a charm, it's in his namespace, I think all it needs is an update
<jose> uuh, I'll take a peek at it in a while!
<jose> yay, it's in Bash! I'll see what I can do to get it finalized before trusty
<lazyPower> jose: your changes are great. ack'ing them. one suggestion for an improvement
<lazyPower> owncloud deploys with a sqlite database, if you upload files and later attach it to mysql - it wipes the file content listing. It may be a worthy exercise to migrate the data to the mysql host
<lazyPower> so, going to ask that you document that in the README so it's covered as expected behavior pending a patch, and this is g2g
<jose> ack!
<lazyPower> ah 1 sec, let me run an upgrade on a predeployed file structure, i almost missed that. but i think its going to be fine
<jose> lazyPower: so, how are users supposed to migrate the info from sqlite to mysql?
<jose> let's say, I'm not a sql expert, I don't know a word of sql (apart from INSERT)
<jose> oh and DROP
<lazyPower> jose: the idea is that the relationship hook would handle the translation. You would READ from the sqlite tables and transpose to mysql. I haven't looked at it in terms of effort
<lazyPower> but the datastructure should be the same. and theory states you would backup the sqlite database in the event of a catastrophic unforeseen data migration error
<lazyPower> so its 1:1 copying data across (in theory)
<lazyPower> if you want, we can look at this closer later this week and see about making a migration script
<jose> sure, I was thinking on that too
<jose> I'll just add that note in the meanwhile
<lazyPower> give it some cursory looking, examine the sqlite database, the mysql tables, and see how difficult you think the migration will be
<lazyPower> i'll do the same, and we'll convene on this when you're ready
<jose> ok!
<jose> I'm also going to take a look at the seafile charm and see if I can do something to it, may be good to have yet another option inside the charm store
<lazyPower> Indeed :)
<jose> lazyPower: fix pushed!
<lazyPower> jose: thanks for the quick turn around
<jose> :)
<lazyPower> bummer
<lazyPower> it fails upgrade
<lazyPower> unit-owncloud-0: 2014-04-08 04:23:46 INFO upgrade-charm find: missing argument to `-exec'
<jose> huh?
<lazyPower> if you deploy the existing charm revision, and merge in your branch and run an upgrade
<lazyPower> it fails on upgrading the charm
<jose> hmm, I didn't write that upgrade-charm hook
<jose> let's take a look at it
<lazyPower> i'm aware
<lazyPower> but we need to triage it for existing users :(
<jose> yeah
<jose> is there a way I can upgrade the charm from a charm store revision to a local revision?
<jose> that way I could try and debug what's happening
<lazyPower> charm get owncloud && deploy as local
<jose> oooh right
<lazyPower> jose: the fix is terminating it with an escaped semicolon
<lazyPower> find /var/www -mindepth 2 -maxdepth 2 -type d -name data  -exec mv {} /var/tmp/ \;
<lazyPower> i'm not sure how this ever worked...
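The escaping fix is easy to check on a throwaway tree; this is a sketch of the same command shape, not the charm's actual hook:

```shell
# find's -exec action must be terminated with an escaped semicolon;
# without the trailing \; find aborts with "missing argument to `-exec'".
tmp=$(mktemp -d)
mkdir -p "$tmp/www/owncloud/data" "$tmp/tmp"
# Same shape as the hook's command: move any second-level "data" dir aside.
find "$tmp/www" -mindepth 2 -maxdepth 2 -type d -name data -exec mv {} "$tmp/tmp/" \;
```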
<jose> lolwat
<lazyPower> yeah, update the hook once you've got it confirmed as failing during upgrade and test the fix
<lazyPower> it should clean it up and make it g2g
<jose> will do, bootstrapping the env
<lazyPower> jose: ah well it makes it further, it also hangs on the data directory being moved in 6.0
<jose> then, escaped semicolon again?
<lazyPower> well the 2 lines of find need a semicolon, thats been patched
<lazyPower> but it runs install, which deploys a different file structure
<jose> hmm
<lazyPower> they moved the data directory for multi-user install isolation instead of having a "pool" of files on the server
<lazyPower> actually no, i take that back, its just flat out missing the data directory which is whats causing this
<jose> which means files need to be moved where the data is now
<lazyPower> yeah, this hook has some idempotency issues
<lazyPower> if you rerun it after the upgrade, you're going to wipe out any uploaded data
<lazyPower> which is a big nono
<lazyPower> a better option would be to create a sentinel file: if it's present, don't back up the data, and when the data is successfully moved back into owncloud, remove the file so the next upgrade can do what its supposed to do
<lazyPower> let me spec this and i'll show you some code
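The sentinel-file guard described above might look roughly like this in bash (all paths and function names are illustrative, not taken from the actual charm):

```shell
# Sketch: guard the upgrade hook's backup/restore with a sentinel so that
# re-running the hook cannot clobber already-backed-up user uploads.
tmp=$(mktemp -d)
DATA_DIR="$tmp/owncloud/data"
BACKUP_DIR="$tmp/backup"
SENTINEL="$tmp/.upgrade-in-progress"
mkdir -p "$DATA_DIR" "$BACKUP_DIR"
echo "user-upload" > "$DATA_DIR/file.txt"

backup_data() {
    # If a sentinel says an earlier run already moved the data, do nothing;
    # repeating the mv here is what would wipe the uploads.
    if [ ! -e "$SENTINEL" ]; then
        touch "$SENTINEL"
        mv "$DATA_DIR" "$BACKUP_DIR/data"
    fi
}

restore_data() {
    mv "$BACKUP_DIR/data" "$DATA_DIR"
    rm -f "$SENTINEL"   # cleared, so the next upgrade can back up again
}

backup_data
backup_data   # second call is a no-op thanks to the sentinel
restore_data
```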
<lazyPower> jose: also, the config file format has changed.
 * jose checks
<jose> what's the difference?
<lazyPower> 1 sec i'll pastebin it
<jose> ok
<lazyPower> strange, pastebinit hates this
<lazyPower> let me see if it accepts it as is - i haven't actually finished the migration to verify it wants all these extra config options
<jose> sure, I have no hurry
<jose> also, if you're tired and want to finish this tomorrow I'm good with it, I have no classes :D
<lazyPower> nah i want to get this wrapped up
<lazyPower> its a pretty simple fix
<lazyPower> my cats keep distracting me
<lazyPower> they are telling me its time for bed
<jose> I have no cats, but have a 6lb bag of gummy bears
<lazyPower> thats a lot of gummy bears
<jose> indeed
<lazyPower> boom
<lazyPower> now its idempotent *and* works
<lazyPower> http://paste.ubuntu.com/7220050/
<jose> ok, and where is $CHARM_DIR defined?
<lazyPower> thats a juju environment export
<jose> oh, got it
<jose> checking right now
<lazyPower>  Cannot redeclare class OC_Config in /var/www/owncloud/lib/config.php on line 41
<lazyPower> well, its nearly a win
<jose> :P
<jose> is that because of the old configs?
<lazyPower> there... is no line 41
<lazyPower> i wonder if its picking up the sample config
<jose> huh, maybe
<lazyPower> nope
<jose> I can't do much right now, bootstrapping, once it's done I'm going to check
<lazyPower> i think there are extra files left over from the upgrade
<jose> maybe the -u flag can give some clues?
<jose> (set -u)
<lazyPower> so, maybe what needs to happen is once the upgrade hook copies out the data and configuration, it should completely wipe the owncloud application files... but this is bad news for users of plugins
<lazyPower> thats only going to warn you of unset variables in bash hooks
<lazyPower> this is coming from the apache logs
<jose> oh
<lazyPower> the only simple fix i see here jose, is to target trusty and revisit this later
<jose> I think we can do that
<lazyPower> unless you want to cycle on the upgrade hook and see if you cant track down what needs to happen
<jose> I'll take a look later on and see what's happening
<lazyPower> but you have a few things to consider, 1) data integrity of the entire owncloud installation. that includes plugins that users may have installed.
<lazyPower> and 2) data migrations for any upgrades that need to happen. eg: if a plugin format has changed from 4.x to 6.x - maybe add some owncloud apps once you get the base system working, then re-run the upgrade after you've added owncloud apps
<lazyPower> i'm going to add a note to the MP that its great on fresh installs, but needs work on upgrades
<jose> thank you for staying up to review this
<jose> lazyPower: thanks for the review and your work on this, have a good night!
 * jose checks seafile
<lazyPower> No problem :)
<frankban> rbasak: morning. just released quickstart 1.3.1: https://pypi.python.org/pypi/juju-quickstart/1.3.1 . It only includes bug fixes, and a packaging module that can be used to switch juju packages source: http://bazaar.launchpad.net/~juju-gui/juju-quickstart/trunk/view/head:/quickstart/packaging.py
<rbasak> frankban: great, thanks. I think this is fine to upload. I'll look at it today.
<frankban> rbasak: thank you!
<ev> did juju at some point stop reusing machines with no services deployed to them? I just stuck destroy-service in a loop and then ran another deploy. To my surprise it just created new cloud instances rather than picking up the unused ones it already had in its brain.
<marcoceppi> ev: yes, since the golang rewrite
<ev> marcoceppi: :(
<marcoceppi> ev: I know :(
<ev> marcoceppi: if I deploy into containers on the machine, will that work around the problem?
<ev> I'm trying for really fast deployments
<ev> so my hope was that I could just destroy-service to a clean slate
<ev> and didn't have to provision the machines all over again
<marcoceppi> ev: yeah, you can actually run juju add-machine with no parameters to provision clean machines to be used during deploy
<ev> though I suppose that doubles the time of the first deployment, since it needs to create and upgrade the container
<marcoceppi> ev: but there's no "clean up" when you destroy service
<marcoceppi> which is why they're not reused, and in the future the machines may auto terminate when no remaining services are on them
<bloodearnest> word of warning - the new lxc cloning and squid-deb-proxy-client support in lxc do *not* work well together
<tvansteenburgh> for a boolean type in a charm's config.yaml, will `config-get` always print 'true' or 'false'?
<tvansteenburgh> or is it 0/1?
<tvansteenburgh> can't find this in the docs
<mthaddon> tvansteenburgh: looks to be "true" or "false" to me (from looking at a live env)
<tvansteenburgh> thanks
<lazyPower> yep. its true/false
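So a hook can branch on the literal string; config-get is stubbed out below (with a hypothetical ssl_enabled option) so the snippet runs outside a hook context:

```shell
# Boolean options come back from config-get as the strings "true"/"false",
# so compare the text rather than treating the value as 0/1.
# Stub config-get for demonstration; in a real hook juju provides it.
config-get() { echo "true"; }

if [ "$(config-get ssl_enabled)" = "true" ]; then
    result="ssl on"
else
    result="ssl off"
fi
```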
<jcastro> lazyPower, hey, I just saw on the calendar, we have an OSX Workflow charm school this Friday
<lazyPower> right on
<lazyPower> i guess i should charge my air
<jcastro> hey you wanna announce it on the list? a reminder?
<jcastro> you just need to copy my template from my last one
<lazyPower> sure. after i'm out of this meeting you got it my man
<noodles775> bcsaller: Hi! Could you please review https://code.launchpad.net/~michael.nelson/charm-helpers/fresh-ansible-relations/+merge/214203 when you have time?
<noodles775> bcsaller / jcastro : Also, what's the policy for merging changes to charm-helpers? bloodearnest and I have been waiting for reviews/landings for a while, and I notice that (according to LP) I can also merge to lp:charm-helpers. Are you ok with me merging approved branches?
<jcastro> marcoceppi, ^^^
<marcoceppi> noodles775: they should show up in the revq
<jose> lazyPower: the calendar link you emailed is canonical-only I think
<lazyPower> obamaaaa
<jcastro> lazyPower, want me to make you a public one?
<jose> we've got the on-air link
<jose> https://www.google.com/calendar/render?eid=MDJiMzY0MDc2YmM1bGxtMzg0MGE2a2dzb3MgZG5vM2lwMG1zZzU1MmRlaTNlM3I3bThqbDBAZw&ctz=Etc/GMT&sf=true&output=xml should do good
<lazyPower> it doesn't let me view it in private browsing
<lazyPower> jcastro: that would be excellent
<mgz> heh, that calendar link hilited me
<noodles775> marcoceppi: So maybe I did the wrong thing by marking this one as approved? https://code.launchpad.net/~bloodearnest/charm-helpers/add-ips-address-to-template-context/+merge/201455
<noodles775> marcoceppi: but my question is more, can I merge that into lp:charm-helpers (ie. do you mind if i do so, or do you want to revert my privs there - I've not tried, but just going from LP)
<jcastro> noodles775, any help with the charm queue would be appreciated! *cough*
<noodles775> jcastro: great, bloodearnest and I are already reviewing each others charm-helper branches, but it'll be easier if we can land them too :-)
<jcastro> that would get a +1 from me
<jcastro> that's what we do for webops/IS.
<jcastro> marcoceppi, how do you guys feel about letting them review each other's branches?
<marcoceppi> jcastro: that's fine, but this is why I hate having an "inactive charmers" in the charmers team
<marcoceppi> we should have a charm-helpers team, with charmers in it
<marcoceppi> not the other way around
<jcastro> ok
<jcastro> so let's do that?
<marcoceppi> otherwise stuff falls out the queue and everyone has to be a charmer to do anything
<marcoceppi> jcastro: sure, that fixes this one case, but it doesn't resolve the other projects or resolve why we have to have inactive charmers
<marcoceppi> it's a bigger issue
<marcoceppi> noodles775: yes, you can merge stuff that others have approved for charm-helpers
<jcastro> ..... ok let's punt that one to the sprint then
<jcastro> then we'll just fix it
<marcoceppi> yes
<mbruzek> Congratulations jose your roundcube work was my first merge to the charm store!
<mbruzek> Thanks for the work on that charm, it should be available in the store!
 * lazyPower claps
<lazyPower> awesome mbruzek! How's it feel to merge into the store?
<mbruzek> scary
<lazyPower> right?!
<mbruzek> I don't want to mess it up and face the wrath of the other charmers!
 * lazyPower readies the blackjack
<jose> congratulations to you, mbruzek, for becoming a charmer! :D
<mbruzek> Thank you jose
<rbasak> frankban: "Support MachineInfo addresses" is the fix for bug 1301464, right?
<_mup_> Bug #1301464: The mega-watcher for machines does not include containers addresses <addressability> <api> <juju-gui> <juju-core:Fix Released by wallyworld> <juju-core (Ubuntu):Fix Released> <juju-quickstart (Ubuntu):Triaged> <https://launchpad.net/bugs/1301464>
<frankban> rbasak: yes
<rbasak> OK, thanks. Testing a candidate package now.
<frankban> rbasak: and "Support the --ppa flag for distro packaging." is the other commit, which introduces the packaging module that can be changed to switch the distro package to not use the ppa
<frankban> rbasak: cool thanks
<AskUbuntu> Cannot deploy local charms after Juju upgrade | http://askubuntu.com/q/445025
<jose> jcastro: hey! is lp:juju-core/docs the right branch to MP to get a fix in juju.ubuntu.com/docs?
<jcastro> no
<jcastro> github.com/juju/docs
<jose> cool, thanks
<jcastro> instructions in the readme there.
<rbasak> frankban: in testing, I'm stuck at "retrieving the Juju API address". Could this be related to the bug 1301464 fix?
<_mup_> Bug #1301464: The mega-watcher for machines does not include containers addresses <addressability> <api> <juju-gui> <juju-core:Fix Released by wallyworld> <juju-core (Ubuntu):Fix Released> <juju-quickstart (Ubuntu):Triaged> <https://launchpad.net/bugs/1301464>
<rbasak> frankban: I think I'll upload my candidate package to my PPA first. Would you be able to test that to see if it's any good, please?
<frankban> rbasak: sure
<frankban> rbasak: I don't think that's related. IIRC that's just a call to "juju api-endpoints"
<rbasak> frankban: OK. I think I've just found the reason. Out of space in / on my host test machine :-/
<rbasak> I'll upload to my PPA anyway - I'm running out of time before my EOD today.
<frankban> rbasak: running with --debug can help in general
<frankban> yeah, same here, EOD in 20
<rbasak> frankban: uploaded to ppa:racb/experimental. I'll test again first thing tomorrow if you don't beat me to it. If it's good, the PPA package is good to upload to the archive with just a version bump.
<rbasak> Sorry it took so long. Because it usually takes a while I just let it get on with it, and didn't see any errors about disk space on my KVM host :-/
<frankban> rbasak: perfect, thank you
<frankban> :-/
<frankban> rbasak: I'll try to use the package with local and ec2 tomorrow morning
<rbasak> frankban: great, thanks. Looks like it built successfully.
<frankban> rbasak: cool, I see it being published
<tvansteenburgh> anyone have a good pattern for making sure that a chunk of hook code only runs on one unit of a service?
<tvansteenburgh> for example, say i have 2 sugarcrm units deployed behind haproxy. i `juju set` some config values that cause the db that sugarcrm is using to be updated. i only want to run that db update from one unit, not both.
<marcoceppi> tvansteenburgh: use peer relations to do a leader election
<tvansteenburgh> marcoceppi: thanks, i'll look into that
 * tvansteenburgh goes to read up
<marcoceppi> tvansteenburgh: something like "okay, this is my unit number, is it less than the unit I'm connecting to? no? then I'm the leader"
<marcoceppi> tvansteenburgh: you'll have to do a bit of round robin until everyone is settled
<marcoceppi> I've been planning on writing it in to charm-helpers for a while, just no time
<tvansteenburgh> marcoceppi: roger. i'd be interested in contributing that, once i grok how it should work
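The lowest-unit-number election marcoceppi sketches could be written as a small helper (illustrative only; in a real peer-relation hook the unit names would come from `relation-list` and `$JUJU_UNIT_NAME`):

```shell
# Leader election by lowest unit number: a unit is the leader iff no peer
# has a smaller unit number than its own.
am_leader() {
    local me="$1"; shift            # our unit name, e.g. "sugarcrm/1"
    local my_num="${me##*/}"        # strip "service/" to get the number
    local peer num
    for peer in "$@"; do            # peer unit names, e.g. from relation-list
        num="${peer##*/}"
        if [ "$num" -lt "$my_num" ]; then
            return 1                # a lower-numbered peer wins
        fi
    done
    return 0
}
```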
<Makyo> Question from Twitter: how is security handled?  Is that on the cloud provider?
<marcoceppi> jcastro mbruzek tvansteenburgh et al, FYI, bug in 1.18.0 and here is the workaround: http://askubuntu.com/a/445101/41
<lazyPower> Makyo: are you talking firewall level security?
<Makyo> lazyPower, sorry, was out.  I think this is wrt the OpenSSL bug today.  How does one manage machine level security.
<Jeffrey_> I am trying to deploy a local charm. I have the charm in ~/charms/percise/<charm name>. Neither "juju deploy --repository=./charms rethinkdb" nor "juju deploy --repository=./charms local:rethinkdb" worked, both showing errors. I know I am probably missing something very stupid.
<Jeffrey_> I've also checked the charm structure with charm proof.
<Makyo> Jeffrey_, I believe it's looking for a repo that contains a charms dir, which contains series dirs.  Thus, `juju deploy --repository=~ local:precise/rethinkdb`
<Makyo> I'm checking now, though, so I can verify in a sec.
<Jeffrey_> Makyo: That was it... Thanks, knew it was stupid.
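For reference, the layout a local repository needs is `<repo>/<series>/<charm>`; a throwaway sketch (paths are illustrative, `REPO` stands in for something like `~/charms`):

```shell
# Expected local-repository layout: <repo>/<series>/<charm>.
REPO=$(mktemp -d)
mkdir -p "$REPO/precise/rethinkdb/hooks"
# juju is pointed at the repo root, and the series goes in the charm URL:
#   juju deploy --repository="$REPO" local:precise/rethinkdb
ls "$REPO/precise"
```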
<lemao_> ola! Does juju provide a simple library for idempotent tasks using bash? E.g. add a user if not already there etc? I can use ansible, salt, etc but was wondering if there is a lightweight alternative that only focuses on idempotent commands
<lazyPower> lemao_: if they exist i am not aware of them.
<sarnold> sounds like a useful thing to have :)
<lazyPower> I've taken to using a few conventions in my own scripts when doing particularly destructive commands; if it's just template population and somesuch, I'll let it fly regardless as it's a fairly inexpensive operation. But as sarnold has said, that does sound like a fairly nice utility to have in the toolbox.
<lazyPower> there was a set of convenience methods - charm-helpers - but they moved to python, and a compatibility layer will be added at some point that allows other languages to leverage those conveniences. Some of those would aid in idempotency through a cache layer, like checking config values against a key/value pairing
<lazyPower> but that's in the future, so today, as far as I know, one does not exist.
<lemao_> lazyPower sarnold: yes, I find myself wanting to perform some simple tasks in a robust, idempotent way and bringing in something like chef, salt, ansible seems like over complicating things
<lazyPower> lemao_: If you start the library, and its of good quality, i would be more than happy to promote its use as users ask about it
<lemao_> lazyPower: let's see how it goes. I don't want to get side tracked into a tangent.
<lazyPower> lemao_: ack
<lazyPower> just throwing that out there
<lemao_> lazyPower: sure! At this point it is more of a validation that this is a good idea.
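A minimal sketch of the kind of idempotent bash helpers being discussed (the helper names `ensure_user` and `ensure_line` are invented for illustration):

```shell
ensure_user() {
    # Add a system user only if it does not already exist.
    id "$1" >/dev/null 2>&1 || useradd --system "$1"
}

ensure_line() {
    # Append a line to a file only if it is not already present.
    grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

# Running a helper twice leaves the system unchanged the second time:
f=$(mktemp)
ensure_line "$f" "export FOO=bar"
ensure_line "$f" "export FOO=bar"
wc -l < "$f"
```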
<lemao_> lazyPower: assuming something like this exists and grows over time, what is the best practice in terms of sharing it across charms? Package it into a .deb package?
<lazyPower> evaluate how charm-helpers works. I would do something similar that has a shared repository model that embeds the current revision of the helpers in the charm itself
<lazyPower> but allows for easy upgrades with a configuration yaml
<lazyPower> maybe write a python driver to propagate the changes so you're not reimplementing a ton of stuff that's solved for you already.
#juju 2014-04-09
<mwhudson> are there many charms for trusty yet?
<mwhudson> or does that sort of thing wait until after trusty is out?
<Kupo24z> Hey all, is it possible to select a MAAS node when running the first 'juju bootstrap' to setup the juju environment? I have pre-defined nodes and I'd like to make sure it provisions on the right one
<davecheney> mwhudson: the latter
<mwhudson> davecheney: is the cheat in the mean time branching all the precise repos to $localcharmrepo/trusty or however you do that?
<rick_h_> mwhudson: do you need them to run on trusty? You can just juju deploy precise/mysql for instance?
<davecheney> mwhudson: correct
<mwhudson> rick_h_: i am mostly interested in architectures that are only supported in trusty :-)
<davecheney> rick_h_: that is the main reason
<rick_h_> mwhudson: if you download zip of a charm you can drop it on the gui and pick the series to use
<rick_h_> mwhudson: ah gotcha
<mwhudson> well i guess armhf was in precise but i still want to be closer to the bleeding edge
<mwhudson> rick_h_: ok, bit tedious when dealing with a gigantic bundle
<mwhudson> (unless there is a trick for that too)
<rick_h_> mwhudson: definitely, no trick until we get fat bundles and the like
<mwhudson> ah ok
<melmoth> hola juju people ! With juju 1.18.0 i have been told one can tell juju, when using the openstack provider, to use a given neutron network for vms (so it does not die when the current tenant has several available networks).
<melmoth> any idea actually how to tell juju which net to use ?
<melmoth> ahhh, most probably the new network setting that is in environment.yaml
<frankban> rbasak: morning. the new quickstart package (in your ppa) seems to work very well
<rbasak> frankban: great, thanks! I'll upload.
<frankban> rbasak: cool thank you
<rbasak> (done)
<noodles775> I'm having to `sudo pkill mongod` each time I re-bootstrap a local environment. Is this known, or the same issue as bug 1208430 ?
<_mup_> Bug #1208430: mongodb runs as root user <mongodb> <juju-core:Triaged> <juju-core (Ubuntu):Triaged> <https://launchpad.net/bugs/1208430>
<melmoth> i'm experiencing problems with juju 1.18.0 trying to use it behind a proxy.
<melmoth> https://pastebin.canonical.com/108082/
<melmoth> if anyone has an idea, you are welcome
<mgz> melmoth: can you access the maas API directly using HTTP_PROXY envvar and wget say? doesn't seem like a juju related error
<melmoth> i dont need a proxy to access MAAS, and the proxy has no access to MAAS directly
<melmoth> (all my boxes, MAAS as well, are vms in a hypervisor, hosted on some libvirt network). The proxy is a real proxy that my main hypervisor must use to access external boxes
<melmoth> so if i try to access the maas box via the proxy, it will not work.. is that what is going on here ?
<mgz> melmoth: printenv?
<melmoth> mgz http://pastebin.com/UsBbLR5u
<melmoth> (i m using juju directly from the box where maas is installed)
<mgz> melmoth: that's odd. what about /home/ubuntu/.profile and /home/ubuntu/.juju-proxy? running juju from inside of the maas... that it's using itself? seems shaky.
<melmoth> i always did it that way.
<melmoth> i mean, juju is just a client, it should be run from... anywhere
<melmoth> i could fire up yet a new vm just for it, but, i never had to, nor do i see the point
<melmoth> lets have a look at the profile
<melmoth> http://pastebin.ubuntu.com/7226462/
<mgz> well, I have no idea where your juju client is getting the proxy setting from then.
<melmoth> well, from the environment most probably, i did run juju set http-proxy
<melmoth> because i need the new machine to use the proxy
<melmoth> that was the whole point of the exercise
<melmoth> but i dont need communication from the juju box or the bootstrap node to MAAS to be proxied
<melmoth> hmmm
<melmoth> may be i should set no-proxy for the maas box ?
<mgz> well, you are telling the bootstrap node to use a proxy with that
<mgz> affecting the local client when uploading a charm is a little odd
<mgz> melmoth: you probably should
<melmoth> i'm just not sure what name to use or which actual machine it's trying to connect to
<melmoth> let s try with 192.168.101.2
<melmoth> \o/
<melmoth> thanks mgz :-)
<mgz> workies?
<melmoth> looks like it did. At least i have a prompt. it's a subordinate so nothing happened, that's normal. let's continue deployment
<melmoth> yeah, mysql seems to be under deployment
<mgz> ace.
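Putting the fix together: in juju 1.18 the proxy values are environment settings, so the MAAS host can be exempted with no-proxy. The addresses below come from this discussion and the proxy URL is an example only; verify the setting names against your juju release.

```shell
# Keep the proxy for deployed machines, but exempt the MAAS host so
# client/bootstrap traffic to MAAS is never routed through the proxy:
#   juju set-env http-proxy=http://proxy.example.com:3128
#   juju set-env no-proxy=192.168.101.2,localhost
```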
<cory_fu> So, I'm new to bzr.  If I want to submit a (set of) change(s) for review on charm-helpers, do I just do bzr push lp:~johnsca/charm-helpers/name-for-branch in my local copy?
<timrc> arosales, Hey I pushed a fix for: https://bugs.launchpad.net/charms/+source/jenkins/+bug/1272520 -- but it's pushed to a +junk branch so I can submit an MP
<_mup_> Bug #1272520: Unable to relate to jenkins-slave: hook failed: "master-relation-changed" <audit> <jenkins (Juju Charms Collection):Confirmed for lazypower> <https://launchpad.net/bugs/1272520>
 * timrc really doesn't want any responsibilities associated with being a member of ~charmers, but this particular bug has bit us, and thought I would kill it with fire
<marcoceppi> cory_fu: yes, then do a bzr lp-propose lp:charm-helpers
<lazyPower> timrc: right on. I'll take a look closer to EOB unless you need it landed immediately.
<lazyPower> and follow marco's directive ^
<timrc> lazyPower, Nah, take your time.  Not hugely critical.  We are deploying from a branch atm... there is some weirdness around authentication too but I think that's separate
<timrc> lazyPower, Right-o.  Thanks
<arosales> timrc: thanks for the mp. I think I had filed a bug against jenkins . . .
<arosales> timrc: https://bugs.launchpad.net/charms/+source/jenkins/+bug/1272520 was the bug I was hitting; did you see something similar?
<_mup_> Bug #1272520: Unable to relate to jenkins-slave: hook failed: "master-relation-changed" <audit> <jenkins (Juju Charms Collection):Confirmed for lazypower> <https://launchpad.net/bugs/1272520>
<timrc> arosales, Well I didn't see any logs, so I can't be 100% sure but my master-relation-changed problem was due to the fact that the master-relation-changed hook was expecting the slave-relation-joined hook (run by jenkins-slave) to set 'slaveaddress' which it wasn't
<timrc> arosales, so master-relation-changed was assigning an empty string and passing that to hooks/addnode and causing breakage
<arosales> timrc: sounds promising that your branch may resolve this issue
<arosales> timrc: thanks for taking the time to contribute the fix back into the juju community :-)
<timrc> marcoceppi, Just to confirm my lp-propose should be:
<timrc> Source: lp:~timrchavez/charm-helpers/jenkins-slave-fix-slave-relation-joined
<timrc> Target: lp:charm-helpers
<timrc> should result in* rather
<marcoceppi> timrc: is this a fix in charm-helpers or the jenkins charm?
<timrc> marcoceppi, it's a fix to a jenkins charm... when I attempted to propose my branch for merge against the actual charm I got a little hate :(
<timrc> bzr: ERROR: exceptions.Exception: lp:~timrchavez/charm-helpers/jenkins-slave-fix-slave-relation-joined is not mergeable into lp:charms/jenkins-slave
<marcoceppi> timrc: you should push your branch to lp:~timrchavez/charms/precise/jenkins/jenkins-slave-fix-slave-relation-joined
<marcoceppi> timrc: you're trying to merge not only across projects, but across distros
<timrc> marcoceppi, muhaha
<timrc> if I could push to charms/ I would have..
<marcoceppi> timrc: you can
<marcoceppi> timrc: it's charms in your namespace
<marcoceppi> anyone can push to lp:~user/charms/...
<timrc> marcoceppi, Ah, I had some problem... maybe my path was bad
<timrc> marcoceppi, Ah, I attempted bzr push lp:~timrchavez/charms/precise/jenkins-slave-fix-slave-relation
<marcoceppi> timrc: yeah, since charms is a distro, you have to put a branch name in there and jenkins becomes the project name
<timrc> Makes sense.  Thanks for working around my obtuseness and helping me :)
<marcoceppi> timrc: np! once it's all pushed, you can just do `bzr lp-propose lp:charms/jenkins-slave`
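The working flow, pieced together from the exchange above ("charms" is a Launchpad distribution, so the branch URL needs `<series>/<charm>` between the namespace and the branch name):

```shell
# Push the charm branch under your own namespace, then propose it
# against the official charm (branch path from this discussion):
#   bzr push lp:~timrchavez/charms/precise/jenkins-slave/jenkins-slave-fix-slave-relation-joined
#   bzr lp-propose lp:charms/jenkins-slave
```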
<timrc> lazyPower, Okay stumbled my way through :) https://code.launchpad.net/~timrchavez/charms/precise/jenkins-slave/jenkins-slave-fix-slave-relation-joined/+merge/214994
<tvansteenburgh> anyone ever successfully configured sticky sessions on the haproxy charm?
<marcoceppi> tvansteenburgh: people in IS probably have
<tvansteenburgh> marcoceppi: thanks, will check there
<Jeffrey_> Is there any way to start a service on the juju agent machine? Or is there a way to deploy to a specific machine number in juju status?
<marcoceppi> Jeffrey_: yes, you can use deploy --to
<Jeffrey_> marcoceppi: Then I get "Added charm "local:percise/rethinkdb-2" to the environment. ERROR cannot assign unit "rethinkdb/0" to machine 0: series does not match" and the machine type of number 0 is "series: percise"
<marcoceppi> Jeffrey_: is this a local provider?
<Jeffrey_> Yes from a directory on the juju client computer.
<Jeffrey_> Also if I add a charm it creates a new machine that stays in the pending state. How can I start it?
<marcoceppi> Jeffrey_: the bootstrap node isn't a real cloud VM, it's your actual machine
<marcoceppi> you'll need to add another machine
<Jeffrey_> marcoceppi: Ok so the bootstrap node can only be for the agent?
<marcoceppi> Jeffrey_: only on local provider
<marcoceppi> Jeffrey_: all other real clouds you can deploy --to 0
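A placement sketch for `deploy --to` (machine numbers come from `juju status`; on the local provider machine 0 is your own host, so target an added machine or a container instead):

```shell
# Illustrative placement commands, not tied to a specific environment:
#   juju add-machine               # allocate a fresh machine
#   juju deploy mysql --to 1       # place the unit on machine 1
#   juju deploy mysql --to lxc:1   # or in a container on machine 1
```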
<Jeffrey_> marcoceppi: I'm sorry I meant I am running a MaaS provider.
<Jeffrey_> I am running from a local repository
<Jeffrey_> Charm is from a local repository
<marcoceppi> Jeffrey_: show me your juju status
<Jeffrey_> marcoceppi:
<Jeffrey_> http://pastebin.ubuntu.com/7227684/
<Jeffrey_> marcoceppi: Even when I just do a normal deploy, the new machine that is allocated for the charm doesn't start. Is there any way to force it?
<marcoceppi> Jeffrey_: something else is wrong, it should start up eventually
<Jeffrey_> marcoceppi: This is the status: http://pastebin.ubuntu.com/7227701/
<Jeffrey_> marcoceppi: I think I figured it out.
<avoine> I'm thinking about changing some configuration variables for the django charm like: requirements_pip_files
<avoine> in your opinion, which would be better?
<avoine> 1) a configuration option with a default, but ignoring it if the result fails
<avoine> 2) a configuration option with an empty default, triggering errors if the file is not there
<marcoceppi> avoine: the latter, in my opinion
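A sketch of option (2), the one recommended here: an empty default that fails the hook loudly when a configured file is missing. The helper name `require_file` is invented for illustration.

```shell
require_file() {
    # Empty value: option unset (the empty default), nothing to do.
    [ -z "$1" ] && return 0
    # Set but missing: fail so the error surfaces in juju status.
    [ -f "$1" ] || { echo "missing requirements file: $1" >&2; return 1; }
}

require_file ""           # unset option: succeeds, a no-op
require_file /etc/hosts   # existing file: succeeds
```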
<tvansteenburgh> `juju deploy local:trusty/haproxy --repository=.`
<tvansteenburgh> is the --repository=. needed there?
<tvansteenburgh> i'm hacking on haproxy and my code changes aren't getting deployed to new units
<tvansteenburgh> not using --repository and wondering if i need to
<tvansteenburgh> marcoceppi: ^
<marcoceppi> tvansteenburgh: pretty sure you always need to specify a repository
<tvansteenburgh> or else?
<tvansteenburgh> maybe it's pulling from lp everytime and i haven't noticed
<marcoceppi> no, it wont do that
<tvansteenburgh> yeah i dunno what it's doing then
<marcoceppi> tvansteenburgh: you can try with the --debug flag
<marcoceppi> see what's happening
<tvansteenburgh> well --repository=. fixed my immediate prob, so i'm moving on
<arosales> mbruzek: marcoceppi does charm-help-test work for you?
<arosales> on charm-tools 1.2.10 it's not giving me the help
<arosales> sorry I meant to state, charm-help test
<arosales> charm help test, also isn't working for me.
<mbruzek> no I get a usage when I try those commands
<arosales> http://paste.ubuntu.com/7228396/
<arosales> I'll file a bug
<arosales> mbruzek: thanks for confirming I was hoping to look at what the valid outputs are for charm tests
<mbruzek> arosales, does "charm test" not work for you?  It appears that the help is broken for me
<arosales> mbruzek: 'charm test' does work for me
<arosales> mbruzek: just that 'charm help test' does not or as the usage suggests 'charm-help test'  also doesn't work
<mbruzek> sure
<mbruzek> That is a bug
<marcoceppi> avoine: charm test -h
<marcoceppi> should give you output
<arosales> https://bugs.launchpad.net/charm-tools/+bug/1305337
<_mup_> Bug #1305337: charm help test does not work on 1.2.10 <Juju Charm Tools:New> <https://launchpad.net/bugs/1305337>
<arosales> marcoceppi that does work
<arosales> perhaps it's just the usage that needs updating then
<marcoceppi> arosales: charm help <cmd> doesn't ever work?
<marcoceppi> i guess it could
<arosales> it didn't work for me in the commands I tried it with, list, test, get, etc.
<arosales> cory_fu: fyi http://paste.ubuntu.com/7228419/
<arosales> which I finally got charm test -h to give me
<arosales> cory_fu: pass, fail, skip, and timeout, so we're hitting timeout
#juju 2014-04-10
<smoser> waigani, around ?
<waigani> smoser: yep
<smoser> can you answer my query in canonical irc ?
<smoser> waigani, ^
<waigani> smoser: you mean the #canonical channel? I don't see your query sorry?
<smoser> canonical irc. (not freenode).
<caribou> Is there a way to tell which series to use when deploying to the local provider with "--to=kvm:0" ?
<caribou>  I mean, my kvm instance is running trusty (like my host) but I'm deploying using precise charms
<jam1> caribou: "juju deploy precise/charm --to kvm:0"
<jam1> caribou: generally you define the series by the charm you are deploying
<caribou> jam1: yeah, but the kvm instance still gets built on trusty
<caribou> jam1: in the nova-compute charm, it does a lsb_release to get the distro & sees trusty whereas the charm is for precise so it fails
<jam> caribou: I've honestly only heard of this bug: https://bugs.launchpad.net/juju-core/+bug/1302820
<_mup_> Bug #1302820: juju deploy --to lxc:0 cs:trusty/ubuntu creates precise container <landscape> <juju-core:Fix Committed by thumper> <https://launchpad.net/bugs/1302820>
<jam> which should be the opposite of what you are seeing
<caribou> jam: indeed; well I hacked around it by providing trusty/nova-compute and it works
<thumper> same bug
<thumper> fix isn't in a release
<jam> thumper: ah, that is that you're creating LXC with the same lsb-release as the host
<jam> he is on trusty and wants precise
<jam> original bug was on precise and wanted trusty
<jam> gotcha
<jam> thumper: should we be backporting the fix to 1.18.1 ?
<thumper> jam: probably
<caribou> thumper: I can live with my workaround, I was just curious since that --to kvm:0 is rather new stuff (to me at least)
 * thumper nods
<caribou> right now, I'm stuck on something else: I keep getting "Host '192.168.122.154' is not allowed to connect to this MySQL server" when I try to add a relation & both services are on the same machine
<caribou> (mysql & keystone for now, but I get this with other services)
<caribou> if mysql & keystone are on separate machines it works
<thumper> caribou: you are using the local provider?
<caribou> thumper: yes, but my colleague sees the same thing on a maas deployment
<caribou> thumper: he's just opened https://bugs.launchpad.net/charms/+source/mysql/+bug/1305582
<thumper> caribou: do you have the network-bridge in the local config set to 'virbr0' ?
<_mup_> Bug #1305582: relation with mysql fail when mysql and glance are deployed on the same node <mysql (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1305582>
<caribou> thumper: yes
<caribou> thumper: but both my machines are LXC container, I only use kvm for the  nova-compute charm
<thumper> ok
<strikov> Hi guys. I'm trying to use juju with my own openstack cloud. Bootstrap finished correctly but when i'm trying to deploy the mysql charm I'm getting this: "ERROR error uploading charm: cannot upload charm to provider storage: cannot make Swift control container: failed to create container: juju-123456789" But swift is accessible and bootstrap was able to create this container already and populated it with some files. What am I doing wrong? Thanks
<jose> lazyPower: ping
<lazyPower> jose: pong
<jose> lazyPower: looks like I fixed it!
<jose> well, kinda
<lazyPower> wooo!
 * lazyPower dances
<jose> the only problem we have is it doesn't migrate apps
<jose> like, people would have to re-enable them manually
<jose> but let's say, data is preserved, I think, the calendar app did preserve data
<jose> lazyPower: ^
<lazyPower> jose: ok. Did you update your MP yet?
<lazyPower> s/yet//
<lazyPower> I can pull it and take a look after hours today and see what the status is and look at how difficult the plugin migration would be
<jose> ok, I'll update the MP asap
<jose> the branch is updated
<lazyPower> jose: no rush
<lazyPower> i wont be able to get to it until after my workday
<cory_fu> I'm getting an error that I don't understand at all when trying to run my amulet test: http://pastebin.ubuntu.com/7231112/ and my test, in case it's relevant: http://pastebin.ubuntu.com/7231117/
<cory_fu> juju status just after shows only the machine and no indication of any units being created.  Unfortunately, since it's LXC, I can't get access to juju log at all
<mbruzek> cory_fu, can you check ~/.juju/local/log/ for the logs
<cory_fu> Well, apparently I can, and that's good to know
<cory_fu> But it doesn't seem to have anything useful in it
<mbruzek> cory_fu Can you pastebin all-machines-log.0?
<cory_fu> mbruzek: http://pastebin.ubuntu.com/7231138/
<mbruzek> OK cory_fu I don't see anything in that log.  It looks like a deployer or amulet bug to me.  Does anyone else have insight on the error http://pastebin.ubuntu.com/7231112/
<mbruzek> cory_fu, You can enable higher trace on your juju environment by running a command after bootstrap.  The command only turns up debugging for one bootstrap/destroy-environment session, but it might help print out more details.
<mbruzek> juju set-env 'logging-config=<root>=DEBUG;juju.provider=DEBUG'
<marcoceppi> cory_fu: mbruzek https://bugs.launchpad.net/amulet/+bug/1293878
<_mup_> Bug #1293878: Amulet should work with local charms that are not in version control <Amulet:Triaged by marcoceppi> <https://launchpad.net/bugs/1293878>
<mbruzek> oh oops that is a bug I reported
<cory_fu> Yep, that looks like it
<cory_fu> Thanks
<mbruzek> thanks marcoceppi
<mbruzek> cory_fu, if I remember correctly the solution was to add the charm to bzr under my own namespace
<marcoceppi> just being in bazaar will fix it, doesn't need to be pushed or anything
<mbruzek> well, a workaround.
<cory_fu> I guess I have to move my bzr learning phase up to before making sure the test passes.  :-p
<marcoceppi> deployer expects bazaar, so in the future amulet will transparently convert non-versioned and otherwise-versioned charms to bzr
<mbruzek> cory_fu if you have bzr questions send them to marc... me
<cory_fu> :-)
<mbruzek> cory_fu, It was really easy to add it to a personal branch
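The workaround described above, sketched out: the charm only needs to be under local bzr version control, nothing has to be pushed anywhere (the charm path is hypothetical).

```shell
# Put the charm under local bzr version control so deployer/amulet
# can find it:
#   cd ~/charms/precise/mycharm
#   bzr init && bzr add && bzr commit -m "initial import"
```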
<cory_fu> Hrm.  Do we need to make and upload new public keys for launchpad because of heartbleed?
<cory_fu> Oh, no, that shouldn't affect ssh
<cory_fu> Hrm.  Then why did bzr start giving me a publickey error
<marcoceppi> cory_fu: just now?
<lazyPower> cory_fu: positive that the current identity is uploaded to launchpad?
<cory_fu> Well, I swear it was working yesterday, but just now I got a publickey error
<lazyPower> cory_fu: and it's your standard ~/.ssh/id_rsa right?
<cory_fu> Well, it's a different one but I have a Host launchpad.net section in my .ssh/config
<lazyPower> Ok, and your config is owned by your user, with proper permissions?
<cory_fu> Yep
<lazyPower> Interesting. that should be fine
<cory_fu> Hrm.  Maybe I should generate a new pair.  :-(
<lazyPower> You can also try removing the pubkey and resetting it
<lazyPower> i doubt thats it but worth a shot
<cory_fu> Ah.  I needed the full bazaar.launchpad.net on the Host line
<cory_fu> Could swear it worked yesterday.  Maybe because I hadn't done bzr whoami yet
<cory_fu> Or some other config to indicate which account to use
 * lazyPower thumbs up
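For reference, the kind of `~/.ssh/config` stanza that resolves this: the Host pattern must match the host bzr actually dials (bazaar.launchpad.net, not just launchpad.net). The key path is a placeholder.

```
Host bazaar.launchpad.net
    IdentityFile ~/.ssh/id_rsa_launchpad   # hypothetical key path
```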
<cory_fu>     from fastimport.helpers import (
<cory_fu> ImportError: cannot import name single_plural
<cory_fu> o_O
<cory_fu> Is there any workaround for getting "ImportError: cannot import name single_plural" from bzr when trying to use fastimport to import from git?
<cory_fu> mbruzek: ?
<lazyPower> sarnold: ping
<sarnold> hey lazyPower :)
<lazyPower> So, you're a random number in the ether until you 'officially' arrive, as predicated by your status update at 13:04:25 EST?
<sarnold> lazyPower: maybe several confounded random functions -- if my pandaboard hangs, then I arrive -- if my pandaboard doesn't hang, then I'm always here :)
<lazyPower> i need to 1:1 with you at some point about building an in-house low-power maas cluster since you've got more micro board experience than I do
<lazyPower> more along the lines of what to look at and what to stay away from
<lazyPower> before i let that statement run away with context
<sarnold> lazyPower: heh, i've just got the one pandaboard, and I think my experience with it is clear to find something else entirely :)
<lazyPower> haha
<lazyPower> Fair enough
<sarnold> lazyPower: back when our buildds were using pandaboards, someone from IS had to poke them just about daily to keep them building :(
<ppetraki> hey do we have a charm that just deploys a base line server? I swear we did
<sarnold> ppetraki: was that 'ubuntu'?
<lazyPower> ppetraki: its the ubuntu charm
<lazyPower> deploys a no frills ubuntu server installation
<ppetraki> sarnold, thanks
<lazyPower> sarnold: also, last night i learned that you cannot colocate a juju-local installation on your maas region controller if you're using bridged ethernet devices without doing some serious voodoo in the juju config
<lazyPower> it tanked networking on my server until i removed the juju-local package :P
<sarnold> lazyPower: hah, yikes
<lazyPower> yeah im like, poking around at really strange configurations
<sarnold> lazyPower: somehow I'm not too surprised, the assumptions of all the different tools involved are pretty strong
<lazyPower> i went from having my maas-master as a virtual machine to using bare metal as the region controller, controlling kvm instances.
<lazyPower> this is all for a blog post about what configurations i found that work, for someone that wants to build an "all encompassing juju lab"
<sarnold> oh nice
<sarnold> some more 'real world' stories of maas and juju use would be pretty cool
<lazyPower> well, i've got 3 special interest groups from CMU that contacted me about running juju workshops
<sarnold> it's either "hey look at this wordpress install" or "yes we have customers with a few hundred or thousands of machines doing this on private clouds and they seem to like it"...
<sarnold> sweet!
<lazyPower> I'll try to get them to publish their experiences to the list so we can reblog and promote their use. The biggest use case i see is a LUG on campus offering free VMs with openstack
<lazyPower> they want to do the full maas + juju  + openstack path
<sarnold> oooo
<sarnold> man, when I was at school, we had one "linux lab" of machines that were castoffs from the windows labs but worked great for us..
<ppetraki> can we support bundles from git hub?
<ppetraki> err do we?
<lazyPower> sarnold: make ingest faster
<sarnold> lazyPower?
<lazyPower> i'm waiting for a bundle i just published to ingest so i can deploy it using deployer... make ingest run faster!
<lazyPower> i know you're secretly the wizard behind all of this
 * sarnold waves his hands meaningfully
<lazyPower> wooo juju genie powers
<Kupo24z> Having issues with Ceilometer HTTP connection exception: [Errno 111] ECONNREFUSED with a fresh charm install, is there anything special that needs to be done to get keystone to authorize it?
<jose> Kupo24z: have you exposed the service?
<Kupo24z> jose: No, not yet
<jose> Kupo24z: can you try exposing it?
<Kupo24z> Same issue after exposing
<lazyPower> Kupo24z: what environment are you running on?
<lazyPower> and can you reach the instance with juju ssh <unit #>?
<Kupo24z> Yes
<Kupo24z> juju ssh ceilometer/0
<timrc> So the jenkins charm on trusty listens on ipv6 only.  We can add "-Djava.net.preferIPv4Stack=true" to JAVA_ARGS and restart the server to get it to listen on ipv4 but is this the preferred way of doing this?
<timrc> restart the service*
<Kupo24z> this is on ubuntu 12.04 LTS with openstack-origin: cloud:precise-havana
<lazyPower> timrc: ideally it should be a boolean option in the config and the charm should handle adding that option
<timrc> lazyPower, I can add that.  Sounds like a good idea
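A sketch of the boolean-option approach suggested above: the charm would expose a flag (the name `prefer-ipv4` is an assumption) and the hook would rewrite JAVA_ARGS idempotently. Demonstrated on a temp file standing in for /etc/default/jenkins.

```shell
DEFAULTS=$(mktemp)              # stands in for /etc/default/jenkins
echo 'JAVA_ARGS="-Xmx256m"' > "$DEFAULTS"

# config-get is the juju hook tool; fall back to "true" outside a hook.
if [ "$(config-get prefer-ipv4 2>/dev/null || echo true)" = "true" ]; then
    # Only prepend the flag if it is not already present (idempotent).
    grep -q preferIPv4Stack "$DEFAULTS" ||
        sed -i 's/^JAVA_ARGS="/JAVA_ARGS="-Djava.net.preferIPv4Stack=true /' "$DEFAULTS"
fi
cat "$DEFAULTS"
```

After this, the hook would restart the jenkins service so the new JAVA_ARGS take effect.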
<lazyPower> Kupo24z: ok, so it really is just ceilometer. i'm not 100% familiar with the charm, but I would check the service to ensure it's listening on the public address
<lazyPower> i'm willing to bet its only exposed to the private address
<lazyPower> actually
<lazyPower> since you're ssh'd in, try to curl get the address on the private ip vs public ip
<lazyPower> and see if it responds on one or the other
<Kupo24z> this is the only thing (non-sshd) that it's listening on
<Kupo24z> tcp        0      0 0.0.0.0:8777            0.0.0.0:*               LISTEN      8118/python
<Kupo24z> PID 8118 is /usr/bin/python /usr/bin/ceilometer-api --log-dir=/var/log/ceilometer
<lazyPower> Kupo24z: i've asked in #ubuntu-server, waiting on a response. I've not managed openstack outside of using the horizon dashboard
<Kupo24z> Can someone assist with the ceph charm? are OSD devices physical disks or partitions to use for the ceph storage cluster?
<lazyPower> Kupo24z: block storage devices.
<lazyPower> eg: /dev/sda2
<Kupo24z> Do they need to be unpartitioned disks, or will anything that is a traditional block device work?
<lazyPower> Kupo24z: i'm fairly certain it runs a format for you if it does not report as having a filesystem
<Kupo24z> and i'm assuming all ceph nodes need to have the same block devices available if spawning multiple nodes since they all rely on the same config file
<lazyPower> correct, unless you want to deploy named ceph nodes
<mbruzek> Kupo24z, If you use a more recent source with the ceph charm you are able to use a directory as the storage device.
<mbruzek> Kupo24z, I ran into a problem where the block devices were difficult to create, so I set the "source" configuration option to "cloud:precise-updates/havana" which gives you a more recent ceph
<mbruzek> Kupo24z, With that version of ceph I was able to set another config option 'osd-devices' to a non-existent directory "/srv/osd/" and ceph created a block device there.
<Kupo24z> mbruzek: wouldn't that create additional overhead if you are going through the filesystem for ceph storage?
<Kupo24z> eg partition -> ext4 -> ceph vs partition -> ceph
<mbruzek> yes but I thought you were asking how to create devices.  If you already have devices then you can safely ignore my comments.
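The configuration mbruzek describes, expressed as charm settings (values are from this discussion; verify the option names against the ceph charm's config.yaml):

```shell
# Newer ceph from the cloud archive, with a directory-backed OSD:
#   juju set ceph source="cloud:precise-updates/havana"
#   juju set ceph osd-devices="/srv/osd"
```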
<lazyPower> mbruzek: thats good to know!
<lazyPower> you told me that last week, however i forgot. Writing that down.
<seepa> Hello, I'm trying to deploy juju-gui on the local provider (all in a lxc), but when I expose the service, agent-state-info returns: '(error: error executing "lxc-clone": lxc_container: failed mounting /var/lib/lxc/juju-precise-template/rootfs onto /var/lib/lxc/juju-precise-template/rootfs; lxc_container: Error copying storage; clone failed)'
<seepa> I'm running juju 1.18-0-trusty-amd64
<rick_h_> seepa: you shouldn't need to expose things in the lxc environments. They don't support it
<seepa> rick_h_: oh, I see.
<seepa> rick_h_: The error occurs after juju deploy juju-gui.
<rick_h_> seepa: oh hmm, yea looks like your error is with creating the lxc machine. I'm not sure on that end
<seepa> how can juju even mount /var/lib/lxc/juju-precise-template/rootfs? /var/lib/lxc is owned by root rwx------ ...
<seepa> well it can't, but tries to mount it
<Kupo24z> Seems I cannot destroy a service if the relation is still open. I've got Ceph as 'life: dying' however it just hangs there, probably because of an existing relation
<Kupo24z> however when I try to remove the relation nothing happens, is there a force?
<jose> Kupo24z: what's the state of the service?
<Kupo24z> jose: http://pastebin.ubuntu.com/7232718/
<lazyPower> Kupo24z: show me the full output of your status
<lazyPower> if you haven't resolved it
<Kupo24z> lazyPower: I just destroyed the environment and started over
<lazyPower> Ok. If a dependent service that it's related to is in an error state
<Kupo24z> Nothing was in error state
<lazyPower> that error will need to be resolved before that service will continue being destroyed
<Kupo24z> It wasn't removing a relation for some reason, no errors at all in juju status
<lazyPower> Ok, if you run into it again ship me the full output listing from juju status and we can investigate from there
<lazyPower> like, ping me. I'm half in half out tonight on IRC.
<lazyPower> jose: did you get that branch pushed?
<jose> lazyPower: yep, and MP updated
<lazyPower> jose: awesome. I'll take a look after i eat dinner.
<jose> enjoy!
<lazyPower> thanks for the quick turn around and effort on that
<jose> I hope it's something that actually works and follows the charm store policy
<lazyPower> If it needs doctoring I'll be happy to doctor it up and submit a MP to your branch
<lazyPower> then we'll poke matt or marco to take a look as confirmation
<jose> awesome then :)
#juju 2014-04-11
<jose> lazyPower: do you have some time to review the charm now?
<lazyPower> going over it as we speak
<jose> my brain is burnt from homework and it needs some relaxing
<jose> cool
<lazyPower> have you started classes already?
<lazyPower> i thought you weren't headed back until next month
<jose> yeah, 8 days ago
<lazyPower> right on
<jose> lazyPower: any updates?
<lazyPower> jose: i'm a little concerned that we are version locking the deployment
<jose> hmm, let me check something and I'll be back in a minute
<jose> (possible answer to that)
<jose> KABOOM!
<jose> owncloud-latest is the same as the latest version, not only for upgrades
<jose> let me fix that part of the charm (or see if I can do it)
<jose> looks like a simple fix
<lazyPower> jose: md5 checking as well, so what i think we need to do is either pull it from the source tree and provide git tag/branch checkout
<lazyPower> or we need to default to from source and provide ppa installation options since there is a suse build tree
<lazyPower> this will help "future proof" the charm
<lazyPower> and we can work on the plugin manifests getting disabled from there
<jose> hmm, I was thinking on pulling both owncloud-latest.tar.bz2 and owncloud-latest.tar.bz2.md5 to get the latest version on install
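The install-hook step jose describes could be sketched roughly like this; the mirror URL and filenames are assumptions for illustration, not taken from the actual charm:

```shell
#!/bin/sh
# Sketch of the install-hook step discussed above: fetch the "latest"
# tarball plus its published .md5 and verify before unpacking.
# The mirror URL and filenames are assumptions, not from the charm.
set -e

# verify_md5 MD5FILE - wrap md5sum -c so the hook can bail out cleanly
verify_md5() {
    md5sum -c "$1" >/dev/null 2>&1
}

fetch_and_verify() {
    url=$1 tarball=$2
    wget -q "$url/$tarball" "$url/$tarball.md5"
    if ! verify_md5 "$tarball.md5"; then
        echo "checksum mismatch for $tarball" >&2
        return 1
    fi
    tar xjf "$tarball"
}

# hypothetical invocation:
# fetch_and_verify https://download.owncloud.org/community owncloud-latest.tar.bz2
```

Note the whole scheme depends on the upstream .md5 file being kept current, which is exactly the failure mode jose runs into later.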
<lazyPower> hmm...
<lazyPower> ok i'll allow it
 * lazyPower does the juju blessing on jose's efforts
<jose> :P
<jose> ok, let's test-deploy
<lazyPower> jose: link me the diff
<lazyPower> i've got a 4.x already populated with plugins and data ready to test the migration
<jose> lazyPower: if you do upgrade-charm it should be good to go
<jose> the only move I'm doing from the branch pushed is the install hook
<lazyPower> so you're only changing the package names in the source?
<jose> on the install hook, yes
<lazyPower> ah ok, got it
<lazyPower> just re-merged and i saw the change
<jose> cool!
<lazyPower> jose: have you noticed that after you deploy the unit, and set a username, you're still prompted to create a user on first run?
<jose> lazyPower: yep, I think those variables should be eliminated
<lazyPower> we have to keep them around due to backwards compatibility. There's either a) a way to fix them, or b) we need to update the config.yaml to note they are deprecated
<lazyPower> my inclination is towards a)
<jose> ok, let me check that before I do the deploy
<lazyPower> jose: awesome. validated the upgrade is non destructive
<lazyPower> and looking at the plugin manifests, they change from version to version, so the re-enablement of the plugins is a non-issue from what i can see. i'm still nosing about
<lazyPower> and your charm upgrades are compliant with: http://doc.owncloud.org/server/6.0/admin_manual/maintenance/update.html
 * lazyPower thumbs up
<jose> cool
<jose> now I'm checking that username/password thing
<lazyPower> let me try again with a multi-user install, and validate we aren't going to hurt anyone thats got more than one user, since thats the last thing i can think of as an edge case
<jose> lazyPower: did you do the deploy as a standalone instance or with mysql?
<lazyPower> both
<jose> because as far as I can see here, the admin and pass config options only work with mysql, they're called for on the db-relation-changed hook
<lazyPower> right
<lazyPower> its called out in the README
<lazyPower> tbh i had not reviewed the readme this go-around. Thats why it wasn't nacked on the first review.
<lazyPower> so, disregard the instructions above
<jose> ok, no touching user/pass!
<jose> deploying a fresh instance with the charm
<jose> lazyPower: a bit offtopic, do you know how to make 'juju stauts' an alias for 'juju status'? :P
<lazyPower> not with the space no, but i have however set an alias as js='juju status'
<jose> juju stauts is a common typo of mine
<lazyPower> i'd just alias it to something shorter to type
<lazyPower> all my common commands are shortened like that in a ~/.bash_aliases file i version
<jose> yep, js may be a good idea
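A minimal ~/.bash_aliases along those lines; only `js` comes from the discussion above, the other alias and the typo-catching wrapper are illustrative:

```shell
# ~/.bash_aliases - shortened juju commands. Only js is from the
# discussion above; jd and the wrapper below are illustrative.
alias js='juju status'
alias jd='juju deploy'

# Bash can't alias the two-word "juju stauts" directly, but a wrapper
# function around juju can rewrite the typo before dispatching:
juju() {
    [ "$1" = "stauts" ] && shift && set -- status "$@"
    command juju "$@"
}
```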
<lazyPower> alright, i've validated with nfs, mysql, and standalone options from -current to your proposed MP
<lazyPower> everything seems to be in order
<jose> houston, we have a problem
<lazyPower> what'd you find?
<jose> yesterday I found a bug on owncloud itself, the -latest.tar.bz2.md5 hash is not updated, which would cause an endless loop in the md5 check
<jose> we would need to hardcode the md5 until they fix it
<lazyPower> leave it as the orig tarball
<lazyPower> i'll have another charmer check in the morning, and once it's acked i'll merge it
<jose> like 6.0.2 instead of latest?
<lazyPower> yeah, you said 6.0.2 is the same as latest at present
<jose> yep
<jose> I'd need to change that in both install and upgrade-charm then
<lazyPower> you can revert the patch that applies the -latest
<jose> oh, there isn't, I did everything in one shot
<jose> I'll just push
<jose> let me do a final test
<lazyPower> i just left a comment that it looks good to me. I dont want to merge this as its 11:10, and i'm pretty tired. I may have missed something but i checked it with all the configurable relationship options
<lazyPower> i'm confident this is a high quality patch and will have no problems being merged
<jose> it's good :)
<lazyPower> great work Jose
<jose> thanks lazyPower :)
<lazyPower> allright man, thats me.
<lazyPower> i'm out
<jose> you have a good night!
<manuel_> hello, i'm trying to get the LXC Provider to work for me. I have had some trouble with lxc but everything seemed to work, until! i reboot the machine (laptop). I then can not connect to my cloud anymore. If i destroy and bootstrap the environment again it works. but, shouldn't that environment survive reboots? The error i get after reboot of Host Machine  http://dpaste.com/1776409/
<will1> hi all, I'm trying to install the Vagrant image with Juju pre-installed : precise-server-cloudimg-amd64-juju-vagrant-disk1.box from https://juju.ubuntu.com/docs/config-vagrant.html but it looks like they're no longer available. Has the link changed?
<manuel_> will1: i was having the same problem. I'm trying the ones for raring now: http://cloud-images.ubuntu.com/vagrant/raring/current/raring-server-cloudimg-amd64-juju-vagrant-disk1.box
<will1> manuel_: I thought I looked in the raring directory. but that's cool, raring will work for me. many thanks, much appreciated :-)
<jcastro> jose, while we finish up the review, you want to do the honors blogging the announcement of the updated owncloud charm?
<jcastro> jose, lazyPower: hey the owncloud guys publish ubuntu packages
<jcastro> that means we could use those instead of the tarballs, so we don't have to update the charm on every owncloud release
<jcastro> I'll file a bug
<lazyPower> jcastro: yeah from the suse build service right?
<lazyPower> i was looking for a PPA
<jcastro> yeah
<jcastro> https://bugs.launchpad.net/charms/+source/owncloud
<jcastro> we can resolve some of these bugs
<jose> jcastro: I'd love to, but I'm at university now, if you give me a couple hours I'll be happy to do it
<jcastro> jose, I mean eventually!
<X-warrior> What could be the reason that a juju status shows all agent-state as down on my machines?
<cjohnston> I'm trying to use juju-local.. my bootstrap works, but then when I try to run juju-deployer it tries to deploy, but I get stuck with juju status looking like http://paste.ubuntu.com/7235656/
<cjohnston> any ideas?
<jose> I'm back home, blogging now
<avoine> cjohnston: it looks like lxc can't start containers, do you have your cgroup mounted and all dependencies installed?
<avoine> cjohnston: maybe you could try with this command: sudo lxc-create -t ubuntu
<avoine> to see if lxc is wokring
<avoine> *working
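Spelled out a bit more fully (note lxc-create needs a container name via -n, which the one-liner above omits), that smoke test might look like this sketch:

```shell
# Smoke-test lxc independently of juju; a sketch only - run it where
# creating/destroying a throwaway container is acceptable.
lxc_smoke_test() {
    sudo lxc-create -t ubuntu -n juju-smoke-test &&
    sudo lxc-start -d -n juju-smoke-test &&
    sudo lxc-ls --fancy            # expect juju-smoke-test as RUNNING
    # clean up the throwaway container
    sudo lxc-stop -n juju-smoke-test
    sudo lxc-destroy -n juju-smoke-test
}
```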
<cjohnston> avoine: I installed juju-local.. I do have other containers that I have created and they work
<gnuoy> cjohnston, is this a trust host deploying precise ?
<cjohnston> gnuoy: yes
<gnuoy> cjohnston, I raised https://bugs.launchpad.net/juju-core/+bug/1306537 this morning
<_mup_> Bug #1306537: LXC provider fails to provision precise instances from a trusty host <juju-core:New> <https://launchpad.net/bugs/1306537>
<cjohnston> gnuoy: ack.. things are working for ev and vila, I guess I'm the unlucky one
<gnuoy> cjohnston, well, if you get a fix I'd love to hear about it
<cjohnston> avoine: I don't see any docs about other deps or cgroups or anything like that?
<gnuoy> cjohnston, have you tried deploying a trusty unit out of interest?
<cjohnston> gnuoy: just the bootstrap
<gnuoy> I can fire up cs:trusty/ubuntu no problem
<cjohnston> let me see if I can get trusty to wrok
<cjohnston> work
<gnuoy> fwiw this is what my lxc env looks like having deployed a trusty and precise service http://pastebin.ubuntu.com/7235731/
<cjohnston> #2 is exactly what mine was looking like
<cjohnston> gnuoy: how long did it take for cs:trusty/ubuntu to come up?
<cjohnston> gnuoy: it came up.. :-/
<gnuoy> pretty much instant
<cjohnston> gnuoy: did things work for you two days ago?
<jose> jcastro, lazyPower: would http://software.opensuse.org/download/package?project=isv:ownCloud:community&package=owncloud work good for replacing the tarball on the owncloud charm?
<gnuoy> cjohnston, I couldn't say exactly when it broke but it was pretty recent
<lazyPower> jose: it could
<lazyPower> and probably should. Offer it as an option
<cjohnston> gnuoy: there were a few lxc related changes that landed yesterday
<lazyPower> to use the PPA or to use Source
<jose> lazyPower: got it!
<lazyPower> there are going to be nuanced differences in the packages
<lazyPower> i believe the ppa will install it to /usr and symlink, i may be wrong.
<cjohnston> gnuoy: sorry.. a few days ago
<jose> I'll check how it works and then modify the charm if it seems appropriate
<jcastro> jose, yeah that's what I was thinking
<jose> cool, I'm deploying an ubuntu instance and will check how it behaves
<lazyPower> Woo, looks like we have changes on the vagrant images
<jcastro> Reminder, charm school in ~60 minutes
<jose> 'how to approve an MP jose is about to submit'
<lazyPower> hey jcastro, i think our charm school is going to need to be cancelled. I've run into a blocker. Do we have another idea we can sub in?
<jcastro> do we have a mac available?
<jcastro> so hey basically, the juju-specific box went away
<jcastro> so our docs don't even link to the right boxes right now (fix is in a PR, just added it)
<jcastro> lazyPower, can you resolve your issue in the next 15 minutes?
<marcoceppi> But those boxes don't have juju installed and bootstrapped already
<lazyPower> jcastro: actively trying to do so
<jcastro> it's installed, just not bootstrapped
<jcastro> marcoceppi, the boxes disappeared, I don't think Ben is working today so I have no idea where they're supposed to be
<lazyPower> i'm getting lxc container errors on this box when i attempt to boot services. this is emanating from both the precise64 and trusty64 box
<lazyPower> let me fetch a 32box and see if they exhibit the same behavior
<jcastro> man, the entire vagrant page in the docs is depending on the juju boxes existing
<jcastro> I wonder how long they have been missing and we didn't notice. :-/
<jcastro> I'd like to punt this to next friday, we need to get these images back from ben
<jcastro> I mean, we could do it with the vanilla image, but we'd end up doing port forwarding config and a bunch of metawork before we even get started
<lazyPower> jcastro: that's not going to be awesome
<lazyPower> jcastro: the containers on precise32 are forever pending... still waiting to see if there's a change
<lazyPower> but its not looking good
<jcastro> ok so let's punt one week, I'd rather not suck.
<lazyPower> i'm game for that idea. We can even do a how to macguyver your own juju dev environment
<lazyPower> which i wouldn't mind doing
<lazyPower> its fun
<lazyPower> ok we have a machine up. the boot time was 8 minutes on this vagrant image with 2gb of ram allocated
<jcastro> hey so, idea
<jcastro> is there a place we can run a test that returns every link in j.u.c/docs that 404's?
<jose> jcastro: just on time before the tweet went off
<jose> for that you mean, every link on that page or also on the subpages?
<jcastro> I mean every link in the docs, all of them.
<jcastro> so that when something moves or breaks, we know about it
<jose> oh, automatically
<jose> I could've used a link checker manually
<lazyPower> jcastro: i have a utility i wrote
<lazyPower> let me fish that up after i'm out of this meeting
<jcastro> yeah it just needs to be automatic and on a production box, not some VPS, etc.
<jcastro> maybe on the same box that builds the docs
<jcastro> evilnickveitch, what do you think?
<lazyPower> jcastro: its a ruby script, so it'll probably get nacked
<jcastro> yup
<jcastro> I'll ask around, maybe someone on webops can recommend something, surely we can't be the only ones with broken URLs
<evilnickveitch> jcastro, yes, I think I mentioned this months ago. There is a lint.py tool in the tools dir that checks for bug refs
<evilnickveitch> ideally we should add it to that
<jcastro> ah! Perfect
<jcastro> I'll file a bug
<evilnickveitch> jcastro, ok
<evilnickveitch> jcastro, cool - I just discovered the footer has a load of broken links
<jcastro> nice!
<mhall119> jcastro: marcoceppi: can either of you help me with swift?
<mhall119> I'm trying to learn how to setup a bucket on canonistack
 * jcastro is dumb wrt. swift
<cory_fu2> I'm getting a connection refused error whenever I try to juju status or juju destroy-environment on my local env
<cory_fu2> Should I manually delete .juju/local and .juju/environments/local.jenv?
<arosales> lazyPower: do you have a pointer to your "clean up local provider files"
<lazyPower> @cory_fu2 first off do a lxc ls
<lazyPower> make sure there are no running containers left behind. The script we wrote was pretty destructive on active environments and didn't segregate against anyone doing work with another platform like docker.
<lazyPower> cory_fu2: aside from that, you manually delete some files and there's an AU post on it let me fish that up
<lazyPower> http://askubuntu.com/questions/403618/how-do-i-clean-up-a-machine-after-using-the-local-provider
<lazyPower> @cory_fu2 ^
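The cleanup described in that post, condensed into a function; paths are typical juju 1.x local-provider defaults and should be checked against your own setup, and it deliberately only touches containers named juju-* so other lxc/docker users are left alone:

```shell
# juju_local_cleanup - hedged sketch of the manual cleanup from the
# askubuntu post above. Paths are typical juju 1.x defaults; verify
# them before running.
juju_local_cleanup() {
    # stop and destroy leftover juju containers (and ONLY juju ones)
    for c in $(sudo lxc-ls | grep '^juju-'); do
        sudo lxc-stop -n "$c" 2>/dev/null
        sudo lxc-destroy -n "$c"
    done
    # drop the cached local environment state
    rm -f "$HOME/.juju/environments/local.jenv"
}
```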
<themonk> after updating to juju 1.18 and a clean local bootstrap i tried to deploy apache2 but am getting this error in machine 1 "agent-state-info: '(error: container failed to start)'"
<themonk> marcoceppi: after updating to juju 1.18 and a clean local bootstrap i tried to deploy apache2 but am getting this error in machine 1 "agent-state-info: '(error: container failed to start)'"
<marcoceppi> themonk: 1.18.1 was just released
<marcoceppi> try that and see if it's still broken
<jose> lazyPower: have a second?
<jose> marcoceppi: ping
#juju 2014-04-12
<Kupo24z1> Anyone around familiar with the neutron charm?
<Kupo24z1> I keep getting WARNING neutron.db.agentschedulers_db [-] Fail scheduling network  in the controller server.log
<bic2k> Nice work on the 1.18 series. Addressed some critical issues for us @Picatic
<bic2k> wondering if anyone is working on this bug/feature? https://bugs.launchpad.net/juju-core/+bug/1183831
<_mup_> Bug #1183831: unable to specify ec2 availability zone <charmers> <constraints> <ec2-provider> <landscape> <juju-core:Triaged> <https://launchpad.net/bugs/1183831>
#juju 2014-04-13
<jasondotstar> #mongodb
* jose changed the topic of #juju to: Weekly Reviewer: bcsaller || Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP
* lazyPower changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP
<jose> marcoceppi: did you run that videocast with byobu-classroom?
<jose> or what did you use? seems like a *pretty* useful application
<jose> oh, damn, just got to the end of the video
<ValDuare> hows it going
#juju 2015-04-06
<thumper> stub: ping - you around today?
<skay> ARG google marked some juju emails as spam
<skay> lazyPower: it marked some of your posts as spam. GRAR
<skay> I missed a nice post about Docker
<drbidwell> If I have a local install of "juju-quickstart bundle:~landscape/landscape-dense-maas...", can I add my maas target to it so I can deploy on maas controlled hardware?  If so, where do I find how?
<rick_h_> drbidwell: so I think the trick is you have to use maas tags to target the deployment to those machines using constraints.
<rick_h_> drbidwell: see https://jujucharms.com/docs/1.20/charms-constraints#maas-constraints - the constraints would need to be set in the bundle, so you'd have to edit the bundle file to add them in
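A sketch of what that looks like on the command line; the service and tag names below are placeholders, and the tag has to exist on machines in MAAS first:

```shell
# Target tagged MAAS hardware with a constraint; names are placeholders.
deploy_to_tagged_maas_node() {
    juju deploy "$1" --constraints "tags=$2"
}
# usage: deploy_to_tagged_maas_node postgresql dense
# in a bundle file the equivalent is a per-service line:
#   constraints: tags=dense
```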
<skay> what is the right way to replace a block-storage-broker (it is not a subordinate) in an environment?
<skay> I don't want to tear everything down
<skay> what is the common practice? I'm going to go read the postgresql carm to see what it does when the relation is removed
<skay> before I go through and remove the unit and then add a new one and then relate it
<BradyForM> In order to use juju/juju-gui on a local machine I need to set up lxc networking as a better bridge so it is not using 10.* and 192.*  ???
<rick_h_> BradyForM: you're deploying it to an lxc juju environment?
<rick_h_> BradyForM: not following all the way what you're heading towards is what I'm getting at
<BradyForM> rick_h_, I ran juju-quickstart
<rick_h_> BradyForM: with a lxc juju environment?
<BradyForM> and it says that everything is in a container for local
<rick_h_> so if you do juju status you should see your gui deployed to an environment on a machine there with an IP address you can use to access it.
<BradyForM> with a 10.X.X.X for the juju gui.  when I run juju status
<rick_h_> BradyForM: and quickstart should auto open the gui for you once it's up and running
<BradyForM> But I can not access it if it is outside my local ?
<rick_h_> BradyForM: oh, outside of your machine elsewhere on the network?
<BradyForM> example I am on 192.X.X.X and the lxc container is on 10.X.X.X
<BradyForM> for the Gui ^^
<rick_h_> BradyForM: no, you'll have to configure lxc to pull from your outside network for IP addresses
<lazyPower> BradyForM: you can, but it takes some doing and i wouldn't say its "officially supported" -  http://blog.dasroot.net/making-juju-visible-on-your-lan.html
<rick_h_> BradyForM: or you'll have to proxy or ssh tunnel the gui out
<BradyForM> ahh
<rick_h_> well check that out, lazyPower with even written docs on it woot :)
<lazyPower> BradyForM: the better solution would be to use sshuttle for a quick and dirty VPN
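The sshuttle route, sketched; 10.0.3.0/24 is lxcbr0's default subnet and the bootstrap address is whatever juju status reports, so treat both as assumptions to adjust:

```shell
# Quick-and-dirty VPN into the container network via the bootstrap node.
# 10.0.3.0/24 is lxcbr0's default subnet; adjust if yours differs.
juju_lan_tunnel() {
    sshuttle -r "ubuntu@$1" 10.0.3.0/24
}
# usage: juju_lan_tunnel <bootstrap-node-ip>   (runs until interrupted)
```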
<BradyForM> is there a example of running juju on a VM ?
<BradyForM> on the local
<BradyForM> VM is bridged to my local
<lazyPower> you can, we have a vagrant image for it actually
<rick_h_> lazyPower: do you know the latest on juju and that virtualbox tool folks were working on?
<rick_h_> vagrant, that's it
<lazyPower> https://jujucharms.com/docs/1.20/config-vagrant
<BradyForM> you know for testing purposes
<BradyForM> Oo
<lazyPower> BradyForM: if/when you graduate from local usage and still want to isolate your environment, we have a docker container that has all the tools for you
<BradyForM> thanks for all your help ! looking at that link and seeing what I can do that that.
<BradyForM> lazyPower, that sounds great
<lazyPower> BradyForM: https://github.com/whitmo/jujubox - due to app armor profile restrictions this will *not* work with the local provider.
<lazyPower> but its worth a bookmark and checking out later in your ventures when paired with a cloud provider like AWS or digital ocean.
<BradyForM> lazyPower, we are thinking about using ec2 thoughts ?
<lazyPower> Its a solid choice. I use EC2 quite a bit myself when testing
<BradyForM> cool
<BradyForM> lazyPower,  rick_h_   that worked like a charm ;)
<lazyPower> Happy its working for you BradyForM o/
<BradyForM>  \o
<rick_h_> BradyForM: woot
<jamessapara> I'm trying to find a guide on how to recover a juju cluster when one of the state servers' IPs have changed. AWS rebooted one of our three and juju refuses to connect to anything.
<BradyForM> Is there a template to get started writing my own bundles or charms ?
<lazyPower> BradyForM: if you have the 'charm tools' package installed there is a generator
<lazyPower> charm create -h
<BradyForM> lazyPower,  do you know if there is ports for debian jessie ?  for the charm tools that is
<BradyForM> or should I add a ppa and just see if it works
<BradyForM> host is debian jessie
<lazyPower> I would think the ppa would work... but i haven't tested it tbh
<BradyForM> lets give it a shot ! :)
#juju 2015-04-07
<skay> I'm having trouble adding back a relation with python-django and postgresql
<skay> http://paste.ubuntu.com/10755947/
<thumper> marcoceppi_, lazyPower: one of you around?
<thumper> I was doing something last night that seemed to be way too hard for what I was trying to do
<thumper> and I'm wondering if it really is that hard, or whether I should look to push some changes
<thumper> I have a running postgresql unit and was wanting to download the daily db dump to my local machine
<thumper> the db dump has permissions 0600 and owned by the postgresql user
<thumper> and in /var/lib/postgresql/backups by default
<thumper> in order to scp it locally, I first had to log in to the machine and copy it (wanting to leave the original untouched) to the home dir and chown (or chmod) it
<thumper> is there an easier way, or should we defer this until we have an action?
<thumper> we could have an action that is 'get me the latest db dump' right?
<thumper> jw4: ^^ re action?
<rick_h_> thumper: yea, you could do an action, though honestly I'd suggest the action is more 'put this in some public space' like an s3 bucket or the like
<rick_h_> thumper: the other way to go would be a subordinate to do something like inotify and auto copy the db to a known location for you so you can pull it down and auto do it every time a backup is made
<rick_h_> thumper: and then you could do stuff like longer term config in the service like s3 credentials/account and such as well which would be interesting since actions don't have persistent 'remember me' settings really
<thumper> rick_h_: so the current answer is "no, it is kinda icky" ?
<rick_h_> thumper: yep
<rick_h_> thumper: in prodstack they have a convention of where to stick backups and then you ping them and they add it to a script that walks services copying dumps for you basically
<rick_h_> thumper: just over ssh/scp
<rick_h_> thumper: so even in our production it's icky
 * thumper sighs
 * thumper adds a TODO note to work out how to make this less icky
<thumper> on the plus side...
<rick_h_> thumper: storage + actions? :)
<thumper> I managed to grab a backup (dump) of my prod database, and load it into my dev environment running in an lxc container
<thumper> so I can test my charm upgrade with the prod data
<rick_h_> nice
<rick_h_> now you just need a restore action that takes the dump and auto loads it for you :P
<thumper> had to play around with 'pg_restore' a bit, but it was ok
<thumper> yep, that too would be nice
<stub> thumper: We have been using subordinate charms to shuffle the files to their final locations, so it hasn't bitten us.  For a charm, making dump world readable so 'juju scp' works would be fine from a security POV.
<stub> rsync subordinate, stuff-into-swift subordinate, and now the new backup system.
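thumper's manual workaround, condensed into a function: stage the root-owned dump somewhere juju scp can read, then pull it down. The unit name and dump path in the usage line are assumptions matching the discussion:

```shell
# Stage the root-owned dump somewhere juju scp can read, then pull it
# down. Unit name and dump path below are assumptions.
fetch_pg_dump() {
    unit=$1 dump=$2
    # copy (leaving the original untouched) and relax ownership/mode
    juju ssh "$unit" "sudo install -o ubuntu -m 0644 $dump /home/ubuntu/dump.sql"
    juju scp "$unit:/home/ubuntu/dump.sql" .
}
# usage: fetch_pg_dump postgresql/0 /var/lib/postgresql/backups/latest.dump
```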
<arosales> marcoceppi_, if you have an opportunity could you kick a test off on https://code.launchpad.net/~cbjchen/charms/trusty/ubuntu/lxc-network-config/+merge/255262
<schkovich> where can i find docs on which ports need to be open to access juju state server?
<schkovich> lets say that my state server is running on machine 0
<schkovich> if i enable ufw on that machine and allow ssh juju ssh 0 fails
<schkovich> while i can still have standard ssh access eg ssh -i .ssh/mykey publiciip.of.machine.0
<lazyPower> schkovich: i dont think those ports are documented - but thats a really good question
<lazyPower> let me poke a core dev and see if i cant run that down for you
<lazyPower> schkovich: it appears all you need to have open is 17070 and 22
<luqas> freyes: hi, you there?
<schkovich> lazyPower: thanks, let me try it :)
<schkovich> lazyPower: tcp and udp?
<lazyPower> i do believe its just TCP
<lazyPower> but it wont hurt to unblock both protos
<schkovich> lazyPower: opening 17070 for tcp only did the trick. thanks!
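For the record, the ufw rules that worked here (ssh plus the juju API port, TCP only):

```shell
# Allow ssh plus the juju API port through ufw on the state server.
open_juju_state_ports() {
    sudo ufw allow 22/tcp
    sudo ufw allow 17070/tcp
}
```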
<lazyPower> Cheers :)
<freyes> luqas, pong
<luqas> freyes: hi, I was having a look to https://bugs.launchpad.net/juju-deployer/+bug/1383336
<mup> Bug #1383336: TypeError "takes exactly 2 arguments (4 given)" raised while deploying <cts> <juju-deployer:Fix Released by freyes> <https://launchpad.net/bugs/1383336>
<luqas> and I changed the status by mistake
<freyes> luqas, oh, got it, I'll change it back to 'fix committed'
<luqas> cool freyes, thanks a lot and sorry for the trouble
<luqas> btw, do you know in which version of juju-deployer has the fix been committed?
<jrwren> a freshly bootstrapped environment, and I get: juju.rpc server.go:328 error closing code EOF  when I deploy: http://paste.ubuntu.com/10766678/  any recommendations?
<jrwren> bah nevermind trying again works. I'll bet it was typical azure.
<rick_h_> jrwren: yea, bac got that yesterday
<rick_h_> or maybe it was friday when a deploy failed for hime from the charmstore
<bac> yeah, that went away.  never figured it out.
#juju 2015-04-08
<jrandall> I'm hoping someone here might be able to help with a failed `juju upgrade-juju` - I ran it a few hours ago to upgrade from 1.21.1 to 1.22.0, but it failed and now my agent is stuck in an error state
<jrandall> (I also emailed juju@lists.ubuntu.com, but thought I'd try here as well)
<apuimedo> lazyPower: ping
<lazyPower> apuimedo: o/ pong
<apuimedo> lazyPower: what kind of timezone are you in?
<lazyPower> EDT - but you're on my watch list so i get pings sent to my mobile.
<lazyPower> Whats up?
<apuimedo> omg, I'm so sorry about that. ping me when you get up in the morning
<apuimedo> it's just to discuss about the review you sent me ;-)
<lazyPower> No worries mate
<lazyPower> I'll be around in ~ 5 or 6 hours.
<apuimedo> very well :-)
<apuimedo> Enjoy some rest
<mwak> hi
<vila> I can't 'juju status' anymore on a local deployment I made last week-end
<vila> vivid/1.22.0-0ubuntu1 (hmm, upgraded from 1.20 this morning... can this be related ?)
<vila> juju status just hangs
<vila> what can I do to get some better diagnosis ?
<vila> ha
<vila> after a long timeout (~10mins given my previous msg ;) juju status dies with: WARNING discarding API open error: unable to connect to "wss://localhost:17070/environment/65ccd21a-868f-42fe-85b2-72084fb6ceb9/api"
<vila> some question remains though: how do I debug this further ?
<schkovich> vila: do you have a firewall enabled on juju state server?
<lazyPower> apuimedo: pong )
<lazyPower> :)
<vila> schkovich: nope, no firewall
<vila> but I was finally able to boot vivid with systemd instead of upstart and I wonder if that may explain it...
<lazyPower> vila: systemd wasn't supported until 1.23 - here's a mailing list post on it
<lazyPower> https://lists.ubuntu.com/archives/juju/2015-March/005072.html
<vila> finally == I got a fix this morning so I could stop booting with upstart instead of the default systemd
<lazyPower> if you're working with 1.22 - it makes sense that you're running into issues, really sorry about that.
<vila> lazyPower: that's expected when running a dev release isn't it ;-) Thanks for confirming, I already feel much better !
<lazyPower> well 1.23 is the dev release currently :)
<vila> lazyPower: yeah, was referring to vivid but same idea ;)
<lazyPower> oh!
<lazyPower> good call :D
<vila> lazyPower: I was forced to use upstart until today and I didn't realize this could have an impact on juju until I tried 'juju status' ;)
<vila> but my juju deployed lxc container did start and works properly
<vila> lazyPower: so, I'd like to keep running vivid/systemd, I can re-create my local deployment if needed, will I be able to do that with juju 1.23 knowing that I want to deploy on trusty and precise lxc containers ?
<vila> (and I'd prefer to switch back to stable juju asap but that can wait a bit if needed ;)
<lazyPower> vila: yeah, 1.23 should be fine for you to use in dev. I'm using it daily here and its been pretty stable but i'm also on trusty
<lazyPower> so ymmv - but i know that 1.23 in vivid is better supported than 1.22 - and you get the added benefit of the new features enabled by default like juju actions
<lazyPower> upgrading from 1.23 when it lands as stable should be no trouble as well, just juju upgrade-juju
<vila> lazyPower: ack, thanks
<lfforman> good morning
<lfforman> i am new to juju, and i am trying to add an existing DigitalOcean machine to my juju environment, with the add-machine command. my issue is that the ssh port to this server is not the standard port 22, where do I state the ssh port I want to use?
<lazyPower> lfforman: great question, let me spin up a DO environment and inspect
<lazyPower> 1 moment
<lfforman> ok, thanks
<lazyPower> lfforman: this is taking a moment, it appears my node(s) are getting hung up in provisioning on DO
<lazyPower> sorry about the delay
<lfforman> ok
<lfforman> lazyPower: I had to get off my computer for a minute i am back
<lazyPower> ok, i'm reconfiguring the node to accept ssh on a non standard port atm
<lazyPower> finally got my machines out of pending in the DO gui
<lfforman> ok
<lazyPower> it was strange, it doesn't normally give me fits like this :) it must know i'm up to no good
<lfforman> :D
<lazyPower> lfforman: i'm not seeing a clear path forward to do this addition, the command simply errors out with positional arguments :|
<lazyPower> http://paste.ubuntu.com/10774659/ is what i'm seeing
<lfforman> this is the same result i've got
<lazyPower> lfforman: https://bugs.launchpad.net/juju-core/+bug/1441749
<mup> Bug #1441749: Add-Machine does not support non-standard ssh port <juju-core:New> <https://launchpad.net/bugs/1441749>
<lfforman> i am going to have lunch, thank you for the help
<lazyPower> you may want to mark that bug as affects you so you can follow along with any new developments.
<lfforman> ok
<lazyPower> sorry about not having good news, cheers mate
<apuimedo> lazyPower: hi. I hope it's not too late
<lazyPower> o/ apuimedo nope not too late.
<apuimedo> good!
<apuimedo> lazyPower: I fixed the markdown
<apuimedo> moved the .venvs to each charm
<lazyPower> yeah :| sorry about that - but you'll thank me when we get this in CI
<lazyPower> having the .venv in ../ was going to cause problems
<apuimedo> lazyPower: I'm still wondering about if I should just give up and have it all in bazaar
<lazyPower> you can still track upstream and vendor into bzr
<lazyPower> i wrote a tool for this
<lazyPower> https://github.com/chuckbutler/git-vendor
<apuimedo> lazyPower: that would come in handy :P
<apuimedo> thanks!
<apuimedo> The icons I'll handle next week
<lazyPower> its relatively new, so if you encounter any bugs with git-vendor bugs are welcome :)
<apuimedo> lazyPower: I wish I had time
<lazyPower> the changelog support is pending, but tag exports are working well, I've been using it on the docker/kubernetes charms
<apuimedo> I'd write a guide on the service framework too
<apuimedo> after having read the code
<apuimedo> to find the small things
<lazyPower> new stuff for that as well - the services framework has some backwards incompatible changes coming that cory_fu will be sending out as soon as the docs are sorted
<apuimedo> heh
<lazyPower> :S
<apuimedo> that sounds like it's going to keep me busy
<lazyPower> are you installing charmhelpers from pip?
<apuimedo> nope
<cory_fu> The changes are not that significant, really
<lazyPower> if you're following the embedded pattern no change is required.
<apuimedo> I gave up on that
<apuimedo> I failed two deployments due to some breaking commit
<lazyPower> ouch
<cory_fu> The docs for the new version are available here, for comparison, if you want a heads-up: http://big-data-charm-helpers.readthedocs.org/en/latest/
<apuimedo> well, a change will be required the next time I want to rebase ;-)
<apuimedo> which I'd like to be in the same 6 months cycle as openstack
<apuimedo> lazypower: about the purpose of the midonet-repository interface
<lazyPower> ah yes, do tell
<apuimedo> it's a configuration provider charm
<apuimedo> basically, the charms that relate to it
<lazyPower> ok, so its essentially an orchestrator for the services, and covers all the charms shared config?
<apuimedo> get the repository and release information
<cory_fu> Here's a side-by-side comparison of the changes: http://i.imgur.com/lR9Z60W.png  I'm hopeful it won't be too onerous
<apuimedo> so that they can render that information as part of their puppet hiera.yaml
<apuimedo> lazyPower: only the repo info for now
<apuimedo> no more shared things
<apuimedo> it saves you from having many subordinate deployments going around
<lazyPower> i understand why you chose that pattern, makes sense to me
<apuimedo> and the rendering is common
<apuimedo> so it's not really code duplication, it's similar to charm-helpers embedding
<lazyPower> i still have 3 of the midonet charms to look at, but everything i saw in the first three was extremely good - just some minor nit cleanups for the store really.
<apuimedo> only that instead of pulling, I push from my git repo into the charms
<lazyPower> have you built a bundle for the midonet stack?
<apuimedo> lazyPower: thanks ;-)
<apuimedo> lazyPower: I'm doing so now
<apuimedo> this week
<lazyPower> bundle + tests will = quick win during final review
<apuimedo> I'm basing it off the openstack bundle
<lazyPower> perfect
<apuimedo> still fighting some minor issues in some hooks though
<lazyPower> thats exactly what I was looking for - to see if you were going to consume the existing openstack bundle, and bolt onto it
<apuimedo> yes, something like that
<lazyPower> https://jujucharms.com/openstack/ <-- this will become a topic page in the next week or so when the UI Engineering team finishes the release
<lazyPower> but i think this is a good place to get started anyway :)
<apuimedo> one thing I wanted to ask. To submit my change to charm-helpers and neutron-api I did it in the next branches
<apuimedo> so that it would be upstreamed
<lazyPower> yep, thats the process - submit against next, the charmhelpers guardians review + release.
<apuimedo> but for deployment in production, how crazy is it to deploy that neutron-api?
<apuimedo> (my changes over next)
<lazyPower> neutron is always the troublemaker for me
<apuimedo> lazyPower: s/me/everybody/
<apuimedo> :P
<lazyPower> ^ that
<apuimedo> cory_fu: thanks for the comparison
<cory_fu> np.  I didn't want you getting scared from what lazyPower said.  :p
<lazyPower> :D its not untrue
<apuimedo> so I think I'll backport the changes to neutron-api for current production
<lazyPower> just call me lazy, master of FUD
<apuimedo> Fun Understanding and Discipline?
<apuimedo> cory_fu: have you considered having the services framework be provided by the agent?
<apuimedo> in a juju namespace
<apuimedo> from juju.helpers import services
<cory_fu> We hadn't, really, no.  Seems like the same could be suggested of most of the things in charmhelpers
<apuimedo> cory_fu: I think it would make quite a bit of sense
<apuimedo> things like hookenv and such
<apuimedo> bundled with agent releases
<apuimedo> on the same cycle
<cory_fu> I don't really disagree.
<apuimedo> well, that's almost an agreement :P
<blr> cory_fu: how's the service framework refactor going?
<cory_fu> I think the main argument against is that Juju is intended to be as language agnostic as possible.  So, bundling specific language bindings with the agent, and more so single language only features like the framework, go against that
<cory_fu> (Though, I'd also like to work toward making the patterns, at least, of the framework more language agnostic, as well)
<cory_fu> blr: Really well.  I think it's just about ready to MP against upstream.  I linked a preview of the docs for the new version above (http://big-data-charm-helpers.readthedocs.org/) as well as a side-by-side comparison image that I'm going to include in my upgrade path doc (http://i.imgur.com/lR9Z60W.png)
<blr> cory_fu: neat, will have a look.
<blr> cory_fu: looks good, 'requires', 'callback' and 'cleanup' are clearer.
<blr> cory_fu: partial is new presumably?
<blr> oh that's from functools, nvm :)
<cory_fu> partial is built-in, part of functools
<cory_fu> :)
<blr> cory_fu: are you working on systemd support yet?
<lazyPower> apuimedo: the takeaway is you're almost ready for the final review? like in the mid to late 90% range?
<cory_fu> Nope, but I think that would just involve a change to host.service_*
<blr> cory_fu: will that release break the api? By the looks of it, upgrading our charms should be quite easy however.
<cory_fu> The new version of the framework will definitely not be entirely backwards compatible, but, as you said, I don't expect the changes required to be too bad.  If deemed necessary, we could possibly even come up with a compatibility layer to ease the transition, but I'm reluctant to do that unless really necessary.
<blr> cory_fu: ok, will convey that to ci, they've been using the services framework too.
<cory_fu> Thanks.  Do let me know if a compat layer is something that we should look into
<blr> our charms are reasonably simple, but I can't speak for others.
<apuimedo> lazyPower: :-)
<apuimedo> I'll be as soon as my bundle works
<apuimedo> I'd say 90%
<apuimedo> probably
<apuimedo> lazyPower: not for the amulet part, unfortunately
<apuimedo> so the 90% is over-optimistic because of that
<lazyPower> Would it be helpful if we sat down and had a quick hour-long charm school over amulet tests?
<lazyPower> we can riff and start writing tests while pairing
<apuimedo> lazyPower: that sounds really good for next week. I'll talk with my team tomorrow morning CEST and I'll shoot you an email ;-)
<lazyPower> Sounds good, i know for sure there's no way i can do it on Monday - but later on next week it shouldn't be an issue
<apuimedo> ;-)
<cory_fu> Alright, I have to head out.  Got to finish packing to head to Germany tomorrow.  :)
<apuimedo> cory_fu: have a nice trip
#juju 2015-04-09
<apuimedo> gnuoy: in bundles, how do you tell it to use a local charm
<apuimedo> local:path/to/charm/dir ?
<rick_h_> apuimedo: yes, and then you have to use the juju-deployer to deploy the bundle with a JUJU_REPOSITORY env var set
<apuimedo> cool, thanks ;-)
<stub> http://reports.vapour.ws/all-bundle-and-charm-results/charm-bundle-test-11139-results/charm/charm-testing-azure/1 indicates a problem in the new ci environment, which we had seen in the old one too.
<stub> The exception is virtualenv being run with a python3 interpreter, but for some insane reason python2.7 libraries end up in the path.
<stub> http://reports.vapour.ws/all-bundle-and-charm-results/charm-bundle-test-parent-207/charm/charm-testing-azure/1 shows it getting past that point fine last week.
<apuimedo> rick_h_: can you set 'expose: true' in a bundle?
<rick_h_> apuimedo: definitely
<apuimedo> rick_h_: like this http://paste.ubuntu.com/10782512/ ?
<rick_h_> apuimedo: +1
<apuimedo> thanks ;-)
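Pulling the pieces of this exchange together, here is a minimal sketch of a deployer-style bundle using a local charm and expose (charm names and paths are made up for illustration; apuimedo's actual paste is not reproduced here):

```yaml
# Deploy with: JUJU_REPOSITORY=~/charms juju-deployer -c bundle.yaml
my-stack:
  services:
    mycharm:
      charm: local:trusty/mycharm   # resolved under $JUJU_REPOSITORY
      num_units: 1
      expose: true                  # equivalent to 'juju expose mycharm'
```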
<tvan-afk> stub: http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-215
<tvansteenburgh> stub: i think you were looking at the wrong results?
<tvansteenburgh> lxc, joyent, aws still running
<apuimedo> rick_h_: does it have any effect to have ntp with num_units=0 like in the openstack bundle?
<rick_h_> apuimedo: hmm, is that a subordinate?
<rick_h_> apuimedo: I think juju will not like having a subordinate with a num-units = 0 since it has to be on the parent machine
<rick_h_> apuimedo: but if it's not then I don't think it'll care
<apuimedo> rick_h_: as in "it will have no effect" ?
<rick_h_> apuimedo: it will add the service to the environment, so the charm will be there, but it will not be deployed anywhere
<rick_h_> apuimedo: so it'll take up no machines/etc
<apuimedo> rick_h_: my question is, will it make any difference that the charm is there
<rick_h_> apuimedo: it's a fine line, but it does have an effect in that juju will fetch the charm down from the charmstore and add it to its 'database'
<apuimedo> and relations are added to it?
<rick_h_> apuimedo: yes, if there are relations in the bundle, juju will know 'if this thing has any units it needs to be related'
<apuimedo> but as long as there are no units, no effect, right?
<rick_h_> right
<apuimedo> cool. Thanks
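A minimal sketch of the zero-unit pattern discussed above (service and charm names are illustrative; the openstack bundle's actual ntp entry may differ, and as rick_h_ notes a subordinate changes the semantics):

```yaml
services:
  ntp:
    charm: cs:trusty/ntp
    num_units: 0          # charm is fetched and registered, but deployed nowhere
  app:
    charm: cs:trusty/ubuntu
    num_units: 1
relations:
  - [app, ntp]            # recorded; only takes effect if ntp gains units
```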
<jcastro> evilnick, the instructions for the docs don't take into account the new multiversion stuff
<jcastro> so as a result I have to cherry pick to each doc version
<evilnick> jcastro, yes, I know. it is a pain at the moment, but it is okay, because I am electing myself chief of backports
<jcastro> excellent
<jcastro> I wouldn't have minded doing it if I'd known to do it in 1.18; from there, moving forward is easy
<evilnick> jcastro, which isn't to say i don't trust anyone else to do it properly of course
<jcastro> but marco neglected to tell me all of these things until after I had committed
<evilnick> yeah, it sucks going back
<evilnick> don't worry, i will fix it
<evilnick> jcastro, I will add something to the README also
<drbidwell> With a MAAS I installed landscape-dense-maas with "juju-quickstart --no-browser bundle:~landscape/landscape-dense-maas/landscape-dense-maas" and then tried to install openstack with "juju quickstart bundle:openstack/openstack".  It complained that some of the services for openstack were conflicting with the services of landscape-dense-maas.  What is the right way to start the openstack install and how do I give it my config.yaml to use?
<lazyPower> drbidwell: this is not an uncommon problem - thanks for bringing this up. I have a slight alternative you could try that doesn't involve quickstart
<lazyPower> we have a python tool called juju-deployer, and it's apt-get installable as 'apt-get install juju-deployer' - it will see the difference in topology and amend the deployment command to leverage what's already in the environment.
<lazyPower> juju-deployer bundle:openstack/openstack-base   should do what's right, but I'll bring this up with rick_h_ and see if there isn't something we can do here to ease that pain of duplicated services when using quickstart
<drbidwell> lazyPower: thanks.  I will try it.  Can I amend the config (like add more disks or ceph/ceph-osd) with juju-deployer?
<lazyPower> you would need to modify the bundle i think, as that's all config-based with ceph i do believe.
<jcastro> this has been a longstanding bug
<jcastro> if you have a bundle that wants to deploy "mysql" and you have "mysql" deployed, you need to edit the bundle to something else
<jcastro> this usually means you can't deploy multiple bundles in the same environment
<jcastro> because everyone calls their bundle databases "mysql" or "postgres" instead of something unique
<drbidwell> I have downloaded the openstack.yaml from the openstack charm and edited it.  Will deployer take this yaml file?
<jcastro> juju-deployer -c thatfile.yaml
<drbidwell> Wonderful!
<drbidwell> Can I add placement constraints to thatfile.yaml?
<lazyPower> sure can
<drbidwell> What is the syntax?
<lazyPower> drbidwell: are you looking for a placement directive (to colocate) or need to set machine constraints such as 2GB of memory?
<drbidwell> such as 2GB of memory
<lazyPower> ok let me find either an example or the official docs - i don't recall right off the top of my head
<drbidwell> Actually I will need both as I only have 5 machines to allocate to my openstack at the moment
<lazyPower> drbidwell: ok the placement directive is
<drbidwell> I assume that if I run out of physical machines it can start using lxc in machines that meet the requirements
<lazyPower> to: service
<lazyPower> it will not, it will add them to the topology and they will sit in pending until your elastic cloud can satisfy the machine requirements
<lazyPower> if you need to push it to a lxc container, the syntax is
<lazyPower> to: lxc:# (# being the machine id, or service identifier - eg lxc:1 or lxc:nova-gateway)
<lazyPower> drbidwell: here's an example bundle with colocation placement (no lxc) https://gist.github.com/b64070bc83d3e4725d25
<drbidwell> Thanks
<lazyPower> still looking for the machine constraints, i was certain we had some big data bundles with that embedded
<lazyPower> i'm not finding one but i'm fairly certain it's in the format of adding a key 'constraints:' to the service definition with key=value pairs afterwards
<lazyPower> constraints: mem=2G
<lazyPower> use array notation for multiple constraints
<drbidwell> I will try it.  I have 2 types of machines for my test lab, disk/controllers and compute servers with differing numbers of cores.  Should be easy.
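Combining the placement and constraint syntax lazyPower describes, a hedged sketch of a deployer bundle fragment (service names and values are illustrative; lazyPower himself is recalling the format from memory, so check the official docs before relying on it):

```yaml
services:
  mysql:
    charm: cs:trusty/mysql
    num_units: 1
    constraints: mem=2G cpu-cores=2   # machine constraints, key=value pairs
  wordpress:
    charm: cs:trusty/wordpress
    num_units: 1
    to: lxc:0            # placement directive: an lxc container on machine 0
```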
<lazyPower> evilnick: before I file this bug against the docs (and follow it with a PR) is there anything I need to do wrt multi versions that jcastro mentioned above? i'm going to update the bundle docs w/ constraints listing
<evilnick> lazyPower, it depends what the bug is :)
<lazyPower> https://github.com/juju/docs/issues/341
<evilnick> the basic rule of thumb is, target your change against the *earliest* version that needs changing
<evilnick> and make your PR against that
<lazyPower> ok, i'm fairly certain thats 1.18+
<evilnick> lazyPower, it is easier to pull the changes forward
<lazyPower> so just proposing against master would only make more work for you.
<lazyPower> got it
<evilnick> yes, it looks like it
<lazyPower> ok, should be g2g if the content is approved - https://github.com/juju/docs/pull/342
#juju 2015-04-10
<jrwren> oh awesome. rsync -e 'juju ssh' works.
<jrandall_> this may be a stupid question, but if I upgrade an openstack nova-compute charm while the hypervisor is running instances, what will happen to the running instances?
<jcastro> lazyPower, can you fix your email metadata here: https://api.jujucharms.com/charmstore/v4/trusty/shoutcast-1/archive/metadata.yaml
<stub> Can someone with a unit on a public cloud do 'juju run --unit=foo/0 unit-get --format=json public-address' for me? I need to know if that is a dns name, ip address, or something else.
<stub> Pretty certain it will be the ip address, which means something else is the problem (but still worth checking)
<tvansteenburgh> > juju run --unit=ubuntu/0 'unit-get --format=json public-address'
<tvansteenburgh> "54.198.21.204"
<tvansteenburgh> stub ^
<stub> ta.
<stub> Do I need to do anything before services can use the public-address ? I've got a java app failing to bind to the public address, but other parts of it are happily binding to the private address.
<stub> It might be before open-port is called
<mbruzek> stub: yeah the public clouds are all behind firewalls
<mbruzek> stub: the public-address is not available until open-port is called
<stub> Hmm... so my services framework charm is starting the configured service in the install hook,  before the start hook is called.
<Spads> stub: yeah, the services framework doesn't care which hook it actually is, so long as all conditions are met
<Spads> stub: the new charm helpers framework will be a bit more explicit and less magical about start/stop, though
<stub> Nope, my fault. The default 'start' option in a service definition is [host.service_start, services.open_ports], and I overrode it. So my ports never got opened, and things explode the first time my public address != private address
<stub> Although looking at that default, it should be the other way around if this is indeed the problem.
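stub's bug above can be illustrated with a small stand-in sketch. This is pure Python for illustration only: the names mirror charmhelpers' `host.service_start` and the services framework's `open_ports`, but charmhelpers itself is not imported and the real callback signatures differ.

```python
# Record what the framework's 'start' callbacks do, in order.
calls = []

def service_start(service_name):
    # Stand-in for charmhelpers.core.host.service_start
    calls.append(('start', service_name))

def open_ports(service_name):
    # Stand-in for the services framework's open_ports callback
    calls.append(('open-port', service_name))

# Default-style service definition: both callbacks run on start.
# An override like stub's that drops open_ports means the unit's
# ports are never opened and its public address is unreachable.
service_def = {
    'service': 'my-java-app',
    'start': [service_start, open_ports],
}

for callback in service_def['start']:
    callback(service_def['service'])

print(calls)  # both the start step and the open-port step ran
```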
<Spads> I am aha
<Spads> er, aha!
<stub> We will eventually re-implement Ansible.
<lazyPower> jcastro: fix released, thanks for catching that
<jcastro> <3
<stub> tvansteenburgh, mbruzek: Can either of you kick off the lp:~stub/charms/trusty/cassandra/spike tests again? I think I've sorted things, and local tests are ok.
<mbruzek> stub: sure
<mbruzek> stub kicked
<mbruzek> stub: they are all the active processes here: http://juju-ci.vapour.ws:8080/
<stub> ta
<mbruzek> stub, I just kicked off tests for lazyPower but they should be queued behind your.
<apuimedo> lazyPower: is it possible to deploy lxc services that have to talk to each other on different hosts?
<lazyPower> apuimedo: you'll need to do some SDN
<lazyPower> apuimedo: http://blog.dasroot.net/container-networking-with-flannel.html
<apuimedo> SDN? What is that :P
<apuimedo> xD
<mbruzek> software defined network
<apuimedo> I was joking. I work at Midokura (we provide SDN)
<lazyPower> apuimedo: baseline answer without sdn: Nope - host only communication :)
<mbruzek> I was just reminding myself.
<apuimedo> yeah, that's what I guessed ;-)
<lazyPower> apuimedo: baseline answer with midonet: Network all teh things!
<apuimedo> :P
<apuimedo> but who will deploy the MidoNet that is used by Juju to deploy MidoNet
<apuimedo> it's turtles all the way down
<lazyPower> thats just it, we're reconfiguring the LXC bridge with the flannel charm
<lazyPower> the thing to be aware of, if you're reconfiguring LXC - you need to config that software bridge *before* the service is up
<apuimedo> lazyPower: do you use that config file for defining the interfaces for lxc like nova-flex does?
<lazyPower> we've targeted 2 network devices - lxcbr0 and docker0 - we dont support other networking changes in those charms.
<lazyPower> the rest is leveraging default routes to pick the proper device to send the data OTW
<apuimedo> good
<lazyPower> mbruzek: when you're done fixing up /42 - i have /20 on the kubes charm rdy for rev - https://github.com/whitmo/charm-kubernetes/pull/20
<dpb1> anyone know how to silence the bootstrap on maas from this "ARNING no architecture was specified, acquiring an arbitrary node"
<dpb1> *WARNING
#juju 2015-04-12
<AskUbuntu> Juju - Wordpress hook failed : install | http://askubuntu.com/q/608238
 * Zetas waves to everyone
 * Zetas waves to lazyPower
<lazyPower> o/ Zetas
<AskUbuntu> Juju - agent-state stuck on allocating | http://askubuntu.com/q/608544
#juju 2016-04-11
<rohit_> Hi everyone.. another newbie issue.. I seem to be stuck on this bug - https://bugs.launchpad.net/juju-core/+bug/1566420
<mup> Bug #1566420: lxd doesn't provision instances on first bootstrap in new xenial image <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1566420>
<rohit_> any trick to get over this issue before it's fixed?
<cherylj> rohit_: I'm kinda glad it's not just me that ran into it :)
<cherylj> rohit_:  what I did was bootstrapped another lxd controller (just give it a new controller name)
<rohit_> trying it now
<cherylj> rohit_: you can clean up the original controller in the following hacky way:
<cherylj> 1 - get the instance ID / container name.
<cherylj> (you can do a juju status -m <first controller name>:admin and see the ID there)
<cherylj> 2 - delete the controller using the lxd commands (lxc stop, then lxc delete)
<cherylj> 3 - juju kill-controller <first controller name>
<rohit_> I actually created a new controller before killing the old one
<rohit_> and now all juju commands are failing with "no space left on device"
<cherylj> gah, I was running into that on a small instance I had in AWS
<cherylj> I had to clean up space in /tmp, then lxc stop / lxc delete
<bradm> I'm having a weird error with swift-proxy and the latest next charms as of now - http://pastebin.ubuntu.com/15752193/.  Anyone seen anything like it before?
<freak__> hi everyone
<freak__> need help regarding juju....i want to know 2 things
<freak__> first, where do we define the pool of IPs which eventually get allocated to juju services as public addresses when we deploy them
<freak__>  secondly in my setup when i deployed mysql it got public address as node0.maas and port is allocated to it something like 3306
<freak__>  but when i deployed wordpress it again got public address node0.maas
<freak__> but no port is allocated to it
<jamespage> gnuoy, morning
<gnuoy> morning jamespage
<gnuoy> jamespage,  got a sec for https://review.openstack.org/#/c/301171/ this morning ?
<jamespage> gnuoy, lgtm
<jamespage> gnuoy, I have one more network space review to stack up - I missed neutron-gateway
<jamespage> gnuoy, ok so i can't land https://review.openstack.org/303329 until https://review.openstack.org/#/c/303321/ lands
<jamespage> as I need nova-cc to stop doing the migrations for neutron so that neutron-api can do them...
<jamespage> gnuoy, could you have a look at https://review.openstack.org/#/c/303321/ wolsom already +1'ed but needs a full review
<gnuoy> ack, yep
<jamespage> gnuoy, I'm looking through the queue of features up still - any particular priority from your perspective?
<gnuoy> jamespage, nope
<jamespage> gnuoy, do you have cycles for https://review.openstack.org/#/q/topic:network-spaces ?
<gnuoy> jamespage, I'm looking at Bug #1518771 , I think we need that fixed asap, particularly given the dpdk work. The solution I propose is to remove the fstab entry and instead write KVM_HUGEPAGES=1 to /etc/default/qemu-kvm and restart the qemu-kvm service. Does that sound sane ? (I still need to check on systemd support for that approach)
<mup> Bug #1518771: nova-compute hugepages breaks the boot process <openstack> <upstart :New> <nova-compute (Juju Charms Collection):Confirmed for gnuoy> <https://launchpad.net/bugs/1518771>
<jamespage> gnuoy, I'm not sure exactly what your proposed solution does tbh?
<jamespage> gnuoy, ok just read /etc/default/qemu-kvm
<gnuoy> jamespage, http://paste.ubuntu.com/15754379/
<gnuoy> thats what the init script runs
<jamespage> gnuoy, that looks like a reasonable solution
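The fix gnuoy proposes boils down to a one-line config change (a sketch; the exact restart mechanics depend on upstart vs systemd, which gnuoy notes is still to be checked):

```
# /etc/default/qemu-kvm
# Have the qemu-kvm init script mount hugepages itself, replacing
# the fstab entry that was breaking the boot process (Bug #1518771).
KVM_HUGEPAGES=1
```

followed by a restart of the qemu-kvm service.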
<gnuoy> jamespage, I do have cycles for those review although I don't know ceph at all so usually try and leave ceph reviews to someone with more expertise.
<jamespage> gnuoy, ok if you could pickup the neutron-gateway one that would be appreciated...
<gnuoy> jamespage, if I was to create a branch with that solution would it be easy for you to test dpdk is still happy?
<jamespage> gnuoy, not right now as I gave the dpdk machine back to rbasak
<gnuoy> jamespage, ok
<jamespage> gnuoy, my only concern would be that ovs might race the qemu-kvm startup on boot, resulting in a non-running ovs with dpdk enabled.
<gnuoy> jamespage, the option is to create an upstart script and hook into the /run mounted event eg "start on mounted MOUNTPOINT=/run"
<gnuoy> that might still race though I guess
<gnuoy> jamespage, those ceph changes are trivial enough for me to review
<jamespage> gnuoy, ta
<A-Kaser> Hi
<freak__> need help regarding wordpress in juju
<freak__> i have deployed mysql and wordpress
<freak__> and exposed only wordpress
<freak__> here is my juju status http://paste.ubuntu.com/15754539/
<freak__> i cannot access wordpress from browser
<freak__> and also port is not allocated to wordpress
<rbasak> jamespage: I've not used it yet, but probably will want it this afternoon or tomorrow.
<gnuoy> jamespage, the only solution I can think of is to amend the ovs upstart script to contain similar logic to qemu-kvm to mount hugepages if there's a variable set in /etc/default/openvswitch-switch
<jamespage> gnuoy, hmm
 * jamespage thinks
<gnuoy> jamespage, I've just seen mention that there is hugepage config in /etc/dpdk/dpdk.conf , I assume that's not to do with mounting them in the first place
 * gnuoy goes to check
<jamespage> gnuoy, that's part of the dpdk package itself...
<jamespage> gnuoy, you might not have that installed depending ...
<gnuoy> jamespage, right, but I think amending /etc/default/qemu-kvm covers the non-dpdk use-case. The ovs with dpdk use case may be fixed with /etc/dpdk/dpdk.conf
<jamespage> gnuoy, agreed on the non-dpdk use-case
<jamespage> gnuoy, tbh I think that most deployments will set hugepages via kernel boot opts
<jamespage> gnuoy, at which point systemd mounts them on /dev/hugepages* anyway
<gnuoy> jamespage, does upstart ?
<jamespage> gnuoy, no
<jamespage> gnuoy, but I don't really care about dpdk on anything other than 16.04 tbh
<jamespage> gnuoy, am I allowed to drop the apache2/keystone check for amulet tests?
<jamespage> gnuoy, I just tried to fixup swift for 2.7.0 and both charms fail on that check now...
<gnuoy> jamespage, ok, then I propose that I make the change to use /etc/default/qemu-kvm as it's better than where we are now
<gnuoy> jamespage, yes
<jamespage> gnuoy, well I fixed them anyway
<jamespage> gnuoy, I nacked https://review.openstack.org/#/c/299363/ on the basis of our 'MAAS should do this' conversation...
<jamespage> gnuoy, https://review.openstack.org/#/q/topic:keystone-apache+status:open both ready to go
<jamespage> gnuoy, and https://review.openstack.org/#/q/topic:swift-2.7.0 which are based on those
<jamespage> gnuoy, also I think I've introduced a bug with the neutron pullout
<jamespage> working that now
<gnuoy> omlette eggs something something
<jamespage> gnuoy, https://review.openstack.org/#/c/304034/
<jamespage> fairly minor - the NeutronAPIContext was added conditionally before, but I think its OK to add unconditionally now
<jamespage> gnuoy, tbh if that's the only thing I got wrong, then I'm pretty happy :-)
<A-Kaser> Hi
<marcoceppi> o/
<A-Kaser> cory_fu: me
<cory_fu> A-Kaser: http://interfaces.juju.solutions/
<gnuoy> jamespage, got a sec for https://review.openstack.org/#/c/304097/ /
<gnuoy> ?
<jamespage> gnuoy, already on it
<gnuoy> \o/ ta
<jamespage> gnuoy, do we care about disabling hugepages?
<gnuoy> jamespage, it'd be nice I guess, why do you ask?
<jamespage> gnuoy, we only write the qemu config when hugepages is enabled...
<jamespage> well we might write it under a write_all
<jamespage> but..
<jamespage> gnuoy, oh wait - there is a write_all at the bottom of config-changed...
<jamespage> I'll comment appropriately
<jamespage> gnuoy, two nits only
<A-Kaser> cory_fu: https://jujucharms.com/u/frbayart/
<gnuoy> jamespage, ta
<jamespage> gnuoy, note to self - remember that the git->bzr sync is not that regular...
<petevg> Hi, all. I've got another newb question: what's the best practice when it comes to storing passwords and other config data that a charm has generated. Is there a standard place to put stuff?
<petevg> I ask because I'm writing a couch charm, and couch actually hashes admin passwords that you drop into its config files.
<petevg> ... there isn't a good way to look and see what juju has generated, which makes it hard to login as the couch admin and test to make sure that things work :-)
<marcoceppi> petevg: in the MySQL charm we store the root password in someting like /var/lib/mysql/mysql.passwd as 600 root.root
<cory_fu> A-Kaser: https://pythonhosted.org/amulet/
<marcoceppi> petevg: it's not ideal, but the idea is that only root can read it, and it's in a place that makes sense for the service so the charm hooks can see it
<petevg> marcoceppi: that sounds sensible, even if it isn't ideal. :-) I can do a similar thing in /etc/couch. Thank you.
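The pattern marcoceppi describes can be sketched as below. The path and filename are placeholders (a real couch charm would write under /etc/couch or /var/lib/... as root, per the mysql.passwd example); the demo writes into a temp directory instead.

```python
import os
import stat
import tempfile

def save_generated_password(path, password):
    """Persist a generated password so later hooks can re-read it,
    readable only by the owner (0600), mirroring mysql.passwd."""
    # Create the file with restrictive permissions from the start,
    # rather than chmod-ing after a world-readable window.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, 'w') as f:
        f.write(password)

# Demo in a temp directory instead of /var/lib/<service>/
demo_dir = tempfile.mkdtemp()
pw_file = os.path.join(demo_dir, 'couch.passwd')
save_generated_password(pw_file, 's3cret')

mode = stat.S_IMODE(os.stat(pw_file).st_mode)
print(oct(mode))  # owner-only permissions on the stored password
```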
<cory_fu> A-Kaser: https://github.com/juju-solutions/bundletester
<cory_fu> A-Kaser: https://hub.docker.com/r/jujusolutions/charmbox/
<cory_fu> A-Kaser: https://jujucharms.com/docs/devel/developer-getting-started
<cory_fu> A-Kaser: https://jujucharms.com/docs/devel/authors-charm-store
<cory_fu> A-Kaser: http://review.juju.solutions/
<cory_fu> A-Kaser: https://jujucharms.com/docs/devel/authors-charm-store#recommended-charms
<cory_fu> A-Kaser: https://jujucharms.com/docs/devel/reference-charm-hooks#stop
<cory_fu> A-Kaser: https://jujucharms.com/big-data
<stub> petevg: Since you probably need the password to hand out to clients, you can also store it on the relation. The PostgreSQL charm stores the replication password in the leadership settings (which is how peer units retrieve it), and stores client passwords on the client relations.
<petevg> stub: nice! I will look into that. Thank you.
<A-Kaser> cory_fu: https://jujucharms.com/u/tads2015dataart/mesos-slave/trusty/0/
<cory_fu> https://code.launchpad.net/~dataart.telco/charms/trusty/mesos-slave/trunk
<cory_fu> A-Kaser: https://jujucharms.com/u/dataart.telco/mesos-slave/trusty/
<aisrael> marcoceppi: What needs to be installed to get `charm login` and `charm publish`?
<rick_h_>  aisrael charm from the devel ppa
<aisrael> rick_h_: thanks. I think I've got marco's ppa added and maybe that's screwing me up
<aisrael> Hm. Maybe not.
<A-Kaser> cory_fu: thx a lot
<aisrael> rick_h_: Got it working, thanks!
<cory_fu> A-Kaser: No problem.  Glad I could be of help.
<jamespage> gnuoy, I think this is nearly ready for landing - https://review.openstack.org/#/c/303329/
<jamespage> I've submitted a recheck-full
<cmagina> cory_fu: when will that hadoop_extra_.... fix land in the charms?
<cory_fu> cmagina: I will cut a release when I finish this meeting
<cmagina> cory_fu: awesome, thanks much
<cory_fu> cmagina: Actually, I'm releasing it now.  6.4.3 should be available in a few seconds
<cmagina> cory_fu: cool, i'll get to testing that :)
<marcoceppi> aisrael: what does `charm version` say?
<aisrael> charm 2:2.1.0-0~ubuntu16.04.1~ppa0
<aisrael> charm-tools 2.1.2
<marcoceppi> aisrael: huh
<marcoceppi> aisrael: there's one more thing to try
<marcoceppi> aisrael: ppa:marcoceppi/xenial-chopper
<marcoceppi> aisrael: add that and try again
 * aisrael gets to the choppa
<marcoceppi> aisrael: there's a 2.1.0-1 which is slightly newer
<marcoceppi> aisrael: it's basically latest master
<aisrael> marcoceppi: same error. I'll file a bug.
<marcoceppi> aisrael: ack, ta, it might be a server side issue
<mbruzek> cory_fu: ping
<cory_fu> mbruzek: What's up?
<mbruzek> cory_fu: getting a charm build error with the basic layer, and I checked there is no basic layer on my local filesystem. http://paste.ubuntu.com/15761546/
<cory_fu> mbruzek: Can you link me to your layer's metadata.yaml?
<mbruzek> cory_fu: I don't have 22 lines in my metadata.yaml nor do I have an assembled charm with that many lines
<mbruzek> https://github.com/mbruzek/layer-k8s/blob/master/metadata.yaml
<cory_fu> mbruzek: I just cloned that layer and built it without error
<mbruzek> hrmm.
<mbruzek> cory_fu: can you pull eddy-master branch and try it again?
<mbruzek> charm 2:2.1.0-0~ubuntu16.04.1~ppa0
<mbruzek> charm-tools 2.1.2
<mbruzek> cory_fu: that is the version of charm tools I am using
<cory_fu> mbruzek: Also builds w/o error
<cory_fu> Same version.  (I'm building this in charmbox)
<cory_fu> charmbox:devel
<mbruzek> hrmm
<mbruzek> I can reproduce this failure... I don't see a local layer-basic, or basic directory in my layers directory
<marcoceppi> mbruzek: hangouts to look?
<marcoceppi> mbruzek: rather
<cory_fu> mbruzek: Why are you assuming the issue is with the basic layer?  Do you have local copies of docker, flannel, tls, or kubernetes?
<marcoceppi> mbruzek: build with -L debug and paste the output
<cory_fu> marcoceppi: https://paste.ubuntu.com/15761546/
<cory_fu> It looks like it pulls in flannel and basic from remote, but you could have local versions of the other layers
<mbruzek> cory_fu: marcoceppi: That pastebin was with -L DEBUG
<marcoceppi> mbruzek: it seems to be failing on layer:docker
<marcoceppi> /home/mbruzek/workspace/layers/docker/metadata.yaml
<marcoceppi> that layer is b0rk3d
<mbruzek> ah there it is!
<mbruzek> When I read the output I thought it was in the basic layer, where do you see the docker layer?
<marcoceppi> mbruzek: the end of the traceback
<marcoceppi> mbruzek: the last three lines
<mbruzek> I see
<mbruzek> sorry marcoceppi and cory_fu
<marcoceppi> mbruzek: file a bug, we should have better error handling
<mbruzek> marcoceppi: well it does actually have the file in question there
<cory_fu> +1  that error handling sucks and should be improved
<mbruzek> I just didn't see it
<marcoceppi> mbruzek: yea, but it should just say "There is malformed yaml in FILE"
<marcoceppi> mbruzek: that way you don't bleed while looking at output
<marcoceppi> mbruzek: 2.1.3 for sure
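The friendlier error marcoceppi suggests is a small wrapper pattern; sketched here with the stdlib json parser standing in for the YAML parser charm-tools actually uses (the function and exception names are hypothetical, not charm-tools API):

```python
import json      # stand-in for the YAML parser used by charm-tools
import os
import tempfile

class MalformedConfig(Exception):
    pass

def load_config(path):
    """Parse a config file, naming the offending file on failure
    instead of surfacing a bare parser traceback."""
    with open(path) as f:
        text = f.read()
    try:
        return json.loads(text)
    except ValueError as e:
        raise MalformedConfig(
            'There is malformed yaml in %s: %s' % (path, e))

# Demo: a broken file produces an error that points at the file.
bad = os.path.join(tempfile.mkdtemp(), 'metadata.yaml')
with open(bad, 'w') as f:
    f.write('{not valid')

err = None
try:
    load_config(bad)
except MalformedConfig as exc:
    err = exc
print(err)  # message names the offending metadata.yaml path
```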
<mbruzek> maybe I can work on this one since I signaled a false alarm
<lazyPower> interesting, when the charm layer defines series in metadata, it changes the destination directory to builds/$charm ?
<marcoceppi> lazyPower: it should be builds/output
<marcoceppi> lazyPower: err
<marcoceppi> yes
<cory_fu> mbruzek: If you do work on it, make sure you add docstrings.  ;)
<lazyPower> interesting... looks like i need to add another volume mount to charmbox then
<mbruzek> cory_fu: Oh there will be a plethora of comments, trust me
<lazyPower> ta for the info ;)
<marcoceppi> lazyPower:  https://github.com/juju/charm-tools/issues/115
<marcoceppi> lazyPower: ah, yeah
<aisrael> stub: You around?
<gnuoy> jamespage, https://review.openstack.org/#/c/304097/ has passed a full amulet if you have a sec
<jamespage> gnuoy, ditto - https://review.openstack.org/#/c/303329/
<gnuoy> jamespage, amulet is still running on your mp
<jamespage> gnuoy, oh yes - that was just the smoke from my removal of .unit-state.db
<jamespage> gnuoy, hugepages: DONE
<gnuoy> jamespage, fantastic, thanks
<firl> Is it easily possible to do a new install with mitaka and juju?
<lazyPower> firl o/
<firl> ?
<lazyPower> firl i'm not sure what you're asking about "easy to do a new install"
<lazyPower> are you asking if you can deploy mitaka with juju?
<firl> I haven't seen any mailings / docs on doing a mitaka install (which maas image) any prebuilt etc
<firl> If I should just stick with trusty-mitaka ( if it exists )
<lazyPower> ah, the /next branches of the charms target mitaka. and i'm fairly certain those are being landed as stable as we speak
<lazyPower> beisner ddellav thedac - am i correct in the above statement?
<thedac> lazyPower: they are being tested now but will land in stable in a couple weeks
<lazyPower> ah ok, i thought that push was this week. my b.
<firl> Gotcha, is there a place to get the latest info on this? ( just this channel in a couple weeks? )
<thedac> firl: we will post to the juju mailing list when we release
<firl> thedac: thanks! do you know if there will be a sample bundle, or if heat/ceilometer will easily be installed also ?
<gnuoy> thedac, beisner would you mind keeping an eye on these https://review.openstack.org/#/q/topic:enable-xenial-mitaka-amulet and landing them if they pass ?
<thedac> firl: The next bundle should already exist. Let me find that for you. I know ceilometer has had some feature split with aodh; I am not sure that will be ready.
<thedac> firl: https://jujucharms.com/u/openstack-charmers-next/
<marcoceppi> can I get a review on https://github.com/juju/docs/pull/975
<firl> thedac: perfect, should I be switching to XenialXerus for it, or stay on trusty for a while
<thedac> firl: mitaka will be supported on both. So it depends on your taste for being on the latest and greatest
<beisner> gnuoy, sure, np
<arosales> cherylj: I am not sure if the machines are showing up in juju status
<arosales> rohit_: do you see any machines in your 'juju status' output re https://launchpad.net/bugs/1566420
<mup> Bug #1566420: lxd doesn't provision instances on first bootstrap in new xenial image <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1566420>
<cherylj> rohit_, arosales if you do see machines, then I'm willing to bet that it's a dup of bug 1567683
<mup> Bug #1567683: Agents stuck in "Waiting for agent initialization to finish" with lxd provider <ci> <lxd> <network> <juju-core:Fix Committed by cherylj> <https://launchpad.net/bugs/1567683>
<rohit_> yes ..machines are created..but juju agent doesn't initialize
<cherylj> rohit_: can you paste the output of lxc list?
<cherylj> I bet it's that bug above ^^
 * arosales looks at lp:1567683
<rohit_> https://www.irccloud.com/pastebin/C2EsRpP7/
<rohit_> yes.. It's identical issue
<rohit_> identical to 1567683
<cherylj> rohit_: but I don't see a second lxd container for a service you've deployed?
<cherylj> (unless you took that snapshot just after bootstrapping, but before deploy)
<rohit_> I deleted it .. a sec ago
<cherylj> that would explain it
<rohit_> I am cleaning up it all before I switch to older version of juju
<cherylj> rohit_:  you can hack around it by reconfiguring your lxdbr0 to use 10.0.3.1 as the bridge IP.  Should trigger the containers to use 10.0.4.1
<cherylj> as their bridge
<rohit_> ok
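cherylj's workaround amounts to moving lxdbr0 onto the 10.0.3.x range. A sketch, assuming the xenial lxd package's `/etc/default/lxd-bridge` layout — file location, variable names, and values are illustrative and may differ on other versions:

```
# /etc/default/lxd-bridge  (xenial lxd 2.0 packaging; illustrative values)
USE_LXD_BRIDGE="true"
LXD_BRIDGE="lxdbr0"
LXD_IPV4_ADDR="10.0.3.1"
LXD_IPV4_NETMASK="255.255.255.0"
LXD_IPV4_NETWORK="10.0.3.1/24"
LXD_IPV4_DHCP_RANGE="10.0.3.2,10.0.3.254"

# then restart the bridge, e.g.:
#   sudo service lxd-bridge restart && sudo service lxd restart
```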
<marcoceppi> lazyPower: charmbox:devel doesn't have make installed in it, so make lint fails during reviews
<lazyPower> marcoceppi - why isn't setup installing make?
<marcoceppi> lazyPower: I don't know, report from c0s
<lazyPower> hmm, i suppose thats reason enough to have it in the base image
<lazyPower> lemme see how big it bloats it, 1 sec
<marcoceppi> lazyPower: ack, ta
<c0s> cory_fu: you're right - 'make lint' catches a bunch of the python bugs in the layer's code
<c0s> do you think it would make sense to add 'make lint' as part of the charm-build?
<lazyPower> marcoceppi - no noticeable change, incoming PR to add make
<c0s> that pretty much sums it up : https://twitter.com/c0sin/status/719600673466114049
<jamespage> thedac, beisner: hey can either of your +2/+1 https://review.openstack.org/#/c/303329/ - its done a full charm-recheck and completes the remove neutronfromnova work
<thedac> jamespage: will do
#juju 2016-04-12
<gnuoy> jamespage, easy one if you have a moment: https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-quantum-fix/+merge/291586
<jamespage> gnuoy, ok looking now
<jamespage> gnuoy, +1
<gnuoy> ta
<jamespage> gnuoy, I'd also be tempted to do Quantum/Neutron
<jamespage> but not required - the charms still support both
<gnuoy> jamespage, ok, I'll do that on my next pass if thats ok
<Alex____> Hi, I have a quick question about BigData charms - there was an update announced on the list https://lists.ubuntu.com/archives/bigdata/2016-April/000068.html so my question is: if one tries `juju quickstart apache-hadoop-spark-zeppelin`, is it supposed to deploy a bundle with the new, layered/renamed charms?
<TheMue> Alex____: not yet deeply in there, but http://bigdata.juju.solutions is a good starting point. maybe you'll find your answer there
<Alex____> Thanks! I tried :) it mentions this channel. Will try the mailing list then
<TheMue> hehe, ok, the channel here is fine. but it's a bit early. maybe more luck later.
<jamespage> gnuoy, amulet used deployer right?
<gnuoy> jamespage, yes, I believe so
<jamespage> gnuoy, OK I think the move to make our amulet tests use github is fairly trivial then
<jamespage> gnuoy, https://review.openstack.org/#/c/304482/ if you have two ticks
<jamespage> gnuoy, nice spot on corosync - just peeking at the packages to see why that happens now
<gnuoy> jamespage, I can't make it out atm. I checked the src package for 3.5 and the dir is still valid looking at the code and docs
<jamespage> gnuoy, it is yes
<jamespage> not sure why the package is no longer creating it tbh
<jamespage> gnuoy, +1 on that
<aisrael> stub (or anyone else), Can you spot what I'm doing wrong setting install_sources? http://pastebin.ubuntu.com/15792985/
<lazyPower> aisrael i dont think you need the full string in there, just ppa:openjdk-r/ppa should suffice
<aisrael> lazyPower: it's getting it to accept all three sources that seems to be the problem
<SaMnCo-laptop> Hi there! Reformulating a question here to share info with everyone
<SaMnCo-laptop> so essentially my problem is that I want to use a layer to create a higher level solution
<SaMnCo-laptop> the underlying layer has configuration (config.yaml), but the default values do not fit my usecase
<SaMnCo-laptop> so I would like to change them
<SaMnCo-laptop> and I was wondering what would be the best practice for that
<lazyPower> SaMnCo-laptop - Great question! So you want to change the default configuration of a lower included layer, and it doesn't make sense to use that layers provided default ever in your inheriting layer?
<lazyPower> SaMnCo-laptop - i ask that as a bit of a leading question if it doesn't make sense to simply set that configuration option in a bundle.yaml.  If this is to provide a sane default all around - simply override the config option in your top most layer's config.yaml
<SaMnCo-laptop> lazyPower-  exactly. The example to be fully transparent is openjdk. By default it would only provide jre, and I need full (which is jdk + jre)
<SaMnCo-laptop> well, in a bundle that would mean I lose control of the states
<SaMnCo-laptop> plus you don't necessarily always use bundles
<lazyPower> SaMnCo-laptop - as i understand it, there's an interface: java - which would leave all that up to the subordinate which is delivering java
<lazyPower> in which case, overriding config in your layer doesn't make sense unless you are extending layer: openjdk, to provide a new subordinate, which should still go in a bundle.
<SaMnCo-laptop> lazyPower-  well my initial thought was to extend openjdk yes
<lazyPower> SaMnCo-laptop are you deleting the subordinate key in metadata to just consume the states/et-al? then yeah, just override in config and you're on your way to success
<mbruzek> SaMnCo-laptop: kwmonroe wrote the openjdk layer, perhaps he could help with your question
<SaMnCo-laptop> ah I can do that?
<SaMnCo-laptop> interesting
<SaMnCo-laptop> Yeah, I'll also connect with Kevin to fully understand best practice here, but this isn't the first time I sort of get into "how can I change the config values from higher layer charms"
<lazyPower> SaMnCo-laptop - yep, in layer.yaml you can trim keys out of lower layers config and metadata.
<SaMnCo-laptop> since there are also the layer options that can be tweaked
<SaMnCo-laptop> ah so in layer.yaml, what I am  changing is actually the config.yaml options from lower layers
<SaMnCo-laptop> that's what I wanted to nail
<mbruzek> Yes think of layer options as build time configuration, but config as user level runtime config
<lazyPower> well, yes and no SaMnCo-laptop
<SaMnCo-laptop> Schrödinger config option
<SaMnCo-laptop> now let me open the box :P
<lazyPower> SaMnCo-laptop https://jujucharms.com/docs/devel/reference-layer-yaml#yaml-modifications
<lazyPower> SaMnCo-laptop so dont confuse that with the options declared in layer.yaml - which as mbruzek  stated are build time constructs, or immutable options that are only exposed at build time
<SaMnCo-laptop> nevertheless, this is good and what I was looking for
<SaMnCo-laptop> I don't know why I missed that
<lazyPower> happy to help :) o/
<SaMnCo-laptop> thx buddy!
<marcoceppi> jcastro: have you updated to lxd 2.0.0 from xenial yet?
<SaMnCo-laptop> lazyPower-  So something is still unclear in that link
<SaMnCo-laptop> I can delete keys from config, that is great
<SaMnCo-laptop> but how do I set them for the underlying layer?
<SaMnCo-laptop> like if I delete the java version it's great, less to think about at deploy time, but I would still need to pick one of the available options in my code
<SaMnCo-laptop> in that case, should I delete and recreate the same option in the new config.yaml?
<SaMnCo-laptop> from the upper layer?
<lazyPower> SaMnCo-laptop you don't need to explicitly delete, layers by default override anything declared further below
<lazyPower> config.yaml keys get merged, top most layer wins, so if you declare in config.yaml your overriding config option, it will 'just work' for this usecase
<SaMnCo-laptop> ok so in my case I am just resetting the value in my upper layer
<SaMnCo-laptop> ok cool
<SaMnCo-laptop> useful
<SaMnCo-laptop> thanks
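The two mechanisms lazyPower describes might look like this in a consuming layer — layer names, option names, and defaults are illustrative, not taken from the actual openjdk layer:

```yaml
# layer.yaml -- include the lower layer; optionally prune keys it declares
includes: ['layer:basic', 'layer:openjdk']
config:
  deletes:
    - some-unwanted-option   # hide a lower layer's option entirely

# config.yaml -- or keep the option and just override its default;
# config.yaml keys are merged and the top-most layer wins
options:
  install-type:
    type: string
    default: full            # jdk + jre instead of the lower layer's jre-only
```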
<cmars> charm-tools just upgraded and i got this: https://paste.ubuntu.com/15794377/
<cmars> i'm on xenial
<gnuoy> tinwood, didn't you see something like ^ ?
<rick_h_> marcoceppi: I thought that was corrected a while back when this got packaged? ^
<tinwood> cmars, gnuoy: yes, marcoceppi has fixed this and it's going into the ppa as we speak.
<marcoceppi> cmars rick_h_ it's a known issue, it's because of amulet in stable ppa
<rick_h_> marcoceppi: ah gotcha
<marcoceppi> cmars: just pushed a fix to the ppa, and backporting the now stable charm-tools and charm from xenial to trusty/wily in the ppa
<jcastro> marcoceppi: I can, I'm a day behind
<marcoceppi> jcastro: it'll break networking, as an fyi
<marcoceppi> jcastro: you'll have to run a thing as a heads up
<jcastro> ok
<magicaltrout> what have i ballsed up this time
<magicaltrout> I thought you set your review queue issue to new to get it back in the queue?
<marcoceppi> magicaltrout: yes
<magicaltrout> well i set it to new 23 hours ago
<magicaltrout> the last thing in the queue was 2 days ago
 * marcoceppi checks this bloody old review queue
<magicaltrout> the review queue is about as stable as XWindows when I unplug my displaylink adapter
<marcoceppi> magicaltrout: in so many words, yes
<marcoceppi> magicaltrout: we've got a /so much better instant updated review queue/ on its way, we hope to have a public beta in the next week or two
<marcoceppi> magicaltrout: I've kicked, sharply, it'll take an hour or so to work through the backlog and you'll find your review in there
<magicaltrout> thanks marcoceppi no problem
<bdx> charmers: can someone point me to an example of how to bootstrap an openstack cloud using juju2?
<marcoceppi> bdx: the release notes is the best place at the moment, let me dig
<bdx> marcoceppi: thanks. I think I'm close ...
<lazyPower> cory_fu http://paste.ubuntu.com/15795105/ - have you seen this? I'm getting this on a xenial deployed host
<bdx> marcoceppi: clouds.yaml -> http://paste.ubuntu.com/15795119/, credentials.yaml -> http://paste.ubuntu.com/15795152/
<cory_fu> lazyPower: No, but I'd check `env` for some strange LC settings
<lazyPower> cory_fu - this seems to go deeper, http://paste.ubuntu.com/15795170/
<lazyPower> so I don't think it's base-layer related, looks like possibly an image issue?
<marcoceppi> bdx: that looks right
<bdx> marcoceppi: I feel like I need to add a bunch of other configs like `image-stream`, `image-metadata-url` .... should these go in clouds.yaml ?
<marcoceppi> bdx: don't you have to add an endpoint to the region?
<marcoceppi>           regions:
<marcoceppi>              RegionOne:
<marcoceppi>                 endpoint: http://<whatever>/1.0
<bdx> marcoceppi: so ... exactly .... here is what I get when trying to bootstrap -> http://paste.ubuntu.com/15795224/
<marcoceppi> bdx: IIRC, it should just look up image information from the provider
<cory_fu> lazyPower: I'm in a meeting right now, so I'm not going to be of much help.  Maybe tvansteenburgh can offer some insight
<marcoceppi> bdx: you're going to have to forgive me, I know this is a silly question, but you're def just censoring the ip address for privacy, right?
<bdx> marcoceppi: yea
<marcoceppi> bdx: cool, just wanted to double check
<lazyPower> cory_fu - we have an update, there's a package missing on xenial and debug-hooks gave us some insight when properly trapped
<marcoceppi> bdx: is RegionOne a valid name in the openstack?
<bdx> marcoceppi: yes
<marcoceppi> hum, bdx I'm not sure, rick_h_ do you know who can help with the new openstack cloud/credentials setup?
<bdx> marcoceppi: so, basically I am specifying the endpoint in the wrong section of clouds.yaml?
<marcoceppi> bdx: well, each region should have its own endpoint, right?
<jrwren> bdx: sounds like your novarc wasn't sourced.
<jrwren> bdx: oh, nevermind. I read up now. i'm wrong.
<marcoceppi> bdx: could you move endpoint as a key under RegionOne?
<rick_h_> bdx: so the config is passed in the --config=yaml or --config key=value to the bootstrap command
<bdx> rick_h_: yeah ... yeah possibly I need to be specifying the credentials.yaml when I bootstrap?
<bdx> marcoceppi: I'll try that now
<rick_h_> bdx: sorry, otp and trying to multi-task
<lazyPower> rick_h_ - is that how we flip amazon image stream from release to daily?
<bdx> np
<lazyPower> rick_h_ juju bootstrap --config image-streams='daily' or somesuch?
<rick_h_> lazyPower: yes, any config that was in the env.yaml file is moved to the --config or key=value args on bootstrap of the controller
 * rick_h_ pulls up release notes for that section
<rick_h_> lazyPower: bdx the idea is clouds never change really, but how you bootstrap to them might. So that config isn't in the cloud as you could bootstrap with and without daily streams/etc
<lazyPower> rick_h_ - solid, thanks :) you just unblocked mbruzek and I
<bdx> rick_h_: ok, so I must pass key=value params when bootstrap openstack cloud?
<rick_h_> bdx: or a --config something.yaml
<rick_h_> bdx: see the section "#### Model Configuration at Bootstrap
<bdx> rick_h_: alongside --config credentials.yaml ?
<rick_h_> bdx: in https://docs.google.com/document/d/1ID-r22-UIjl00UY_URXQo_vJNdRPqmSNv7vP8HI_E5U/edit#
<bdx> marcoceppi: using ->  http://paste.ubuntu.com/15795504/ I get the same error
<bdx> http://paste.ubuntu.com/15795520/
<bdx> ok, I think credentials.yaml is automatically being picked up
<bdx> so I probably don't need to specify it...
<bdx> hmmm
<marcoceppi> bdx: right, it will be
<marcoceppi> bdx: ERROR index file has no data for cloud
<rick_h_> bdx: no, credentials.yaml are added through add-credential
<marcoceppi> index file is interesting
<marcoceppi> I wonder if that refers to the index file for images
<rick_h_> bdx: right, it'll grab the default for the cloud if you've added them
<rick_h_> marcoceppi: I bet
<marcoceppi> rick_h_: no --debug flag for bootstrap anymore?
<rick_h_> marcoceppi: :/ ummm, didn't realize that
<bdx> marcoceppi: yeah, it wants an image streams location e.g. somewhere where the index.json references an image id from my cloud ....
<marcoceppi> rick_h_: not showing in `juju help bootstrap`
<rick_h_> marcoceppi: yea, noticing that
<marcoceppi> bdx: can you try with the --debug flag anyways on bootstrap? that might help lock down a location
<marcoceppi> rick_h_: hopefully it's just a silent flag
<rick_h_> marcoceppi: want to test it out, I don't know why it'd be removed
<rick_h_> marcoceppi: yea, exactly
<bdx> marcoceppi, rick_h_: which I have setup, I just don't know how to specify it as config ....
<marcoceppi> bdx: ahhh, okay, this probably lives in the clouds.yaml
<rick_h_> marcoceppi: bdx yes, that seems like it's common to any bootstrap and part of talking to OS
<rick_h_> marcoceppi: bdx I'd expect that to be in the clouds.yaml info
<marcoceppi> bdx: you should be able to put them as parent keys at the same level as regions, using the same key names that were in the old environments.yaml
<bdx> marcoceppi: ok, trying now
<rick_h_> beisner: do you have a sample juju 2 clouds.yaml for openstack around?
<bdx> rick_h_, marcoceppi: http://paste.ubuntu.com/15795783/
<bdx> gives the same index file error
<marcoceppi> butts, hopefully beisner knows, if not I'll go poke some other people
<bdx> marcoceppi, rick_h_: sweet, thanks
<rick_h_> bdx: what's the exact error?
 * rick_h_ goes to look at the source for the openstack config
<bdx> http://paste.ubuntu.com/15795844/
<bdx> rick_h_, marcoceppi: some concrete docs on how to bootstrap an openstack cloud using juju2 would be a huge win
<rick_h_> bdx: +1
<rick_h_> bdx: https://jujucharms.com/docs/devel/clouds#specifying-additional-cloud
<marcoceppi> rick_h_: yeah, looks like we need to expand these a bit
<marcoceppi> as soon as we figure this out bdx, I'll be opening a merge to update them
<rick_h_> bdx: looking at that can you try https://pastebin.canonical.com/154062/ please?
<rick_h_> marcoceppi: yea, I know I got a working copy from sinzui at one point in the past but can't find the link
<bdx> ok, yea, omp
<jhobbs> :q
<jhobbs> oops
<bdx> rick_h_: I haven't permission to access either of the links you have pasted me
<rick_h_> bdx: ah sorry
<marcoceppi> bdx: http://paste.ubuntu.com/15796065/
<bdx> I'm not that cool man
<bdx> awesome
<rick_h_> http://paste.ubuntu.com/15796067/
<rick_h_> bdx: ^
<rick_h_> sorry, autocomplete fail
<jrwren> bdx: I dont know as much as these guys about juju2, but it seems to me you need to fill in that <keystone-ip> with your keystone IP.
<bdx> rick_h_, marcoceppi: I get the same error
<bdx> jrwren: tracking
<rick_h_> bdx: ok, and you've gone through https://jujucharms.com/docs/master/howto-privatecloud ?
<bdx> yeah, I have juju1 controllers bootstrapped all over the place using my local image and tools streams
<rick_h_> bdx: ah ok
<bdx> seeing as the error is referencing `index` file .... I'm inclined to think the heart of this issue revolves around juju not knowing the location of my tools/metadata
<marcoceppi> bdx: it's a good guess
<rick_h_> bdx: http://paste.ubuntu.com/15796290/ is what I got from sinzui
<bdx> rick_h_: that is the same thing I have going on .... how is he specifying his image-stream/image-metadata-url ?
<bdx> those are required params yea?
<rick_h_> bdx: I don't think so if it's standard location
<bdx> rick_h_: standard location?
<bdx> rick_h_: simplestreams tools and metadata
<bdx> rick_h_: tools and metadata must be generated on a per cloud basis ....
<rick_h_> bdx: right, but I *think* they're in a specific key in keystone
<bdx> rick_h_: ehh, a user would define a custom location to where their per-cloud generated tools and metadata are ... because there is no generic location ... because they must be generated per an image id that exists in your cloud right?
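Pulling the thread together, a private-cloud setup along the lines discussed might look like the following — all addresses and URLs are placeholders, and this is against the juju 2.0 betas being discussed here, so key names and argument order may have shifted since:

```
# clouds.yaml -- registered with `juju add-cloud`
clouds:
  mystack:
    type: openstack
    auth-types: [userpass]
    regions:
      RegionOne:
        endpoint: http://<keystone-ip>:5000/v2.0

# credentials are added separately with `juju add-credential mystack`,
# then locally generated simplestreams locations are passed at bootstrap:
#   juju bootstrap mystack mystack-controller \
#       --config image-metadata-url=http://<streams-host>/images \
#       --config agent-metadata-url=http://<streams-host>/tools
```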
<marcoceppi> cmars tinwood rick_h_ amulet - 1.14.2-0ubuntu4 fixes the python-path dep issue, charm-tools, amulet, and all are co-installable again
<marcoceppi> urulama rick_h_ charm and charm-tools landed in xenial last night \o/ I'm drafting up the release announcement now to the list. they've already been backported to the stable ppa for trusty and wily
<cmars> marcoceppi, thanks! updating now
<stub> aisrael: It needs to be a yaml list encoded as a string, Because you don't have line breaks or indentation in there, it isn't a yaml list but a yaml string. Try setting it to install_sources="['sourcea', 'sourceb']"
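stub's point — that without list syntax the value parses as one plain string rather than a YAML list — can be seen directly with PyYAML (the source URLs here are illustrative):

```python
import yaml  # PyYAML

# A bare comma-separated value is a single plain scalar, not a list:
raw = "ppa:openjdk-r/ppa, deb http://example.com/apt xenial main"
assert yaml.safe_load(raw) == raw

# Flow-style list syntax (what stub suggests for install_sources)
# parses as an actual list of sources:
sources = yaml.safe_load("['ppa:openjdk-r/ppa', 'deb http://example.com/apt xenial main']")
assert sources == ["ppa:openjdk-r/ppa", "deb http://example.com/apt xenial main"]
```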
<bdx> charmers, marcoceppi, rick_h_, stub: I just pm'd stub with this, thought I would paste this here instead of retyping it so others can see -> http://paste.ubuntu.com/15797232/
<skaul> hi, I have a doubt about charming a product which does not support a silent way of installation, where the user has to provide input through the command line. How can we charm such products when a silent mode of installation is not present?
<marcoceppi> skaul: well, you could have the user provide that information via the configuration for the charm, then when it's all collected execute the installation, prefilling the prompts either via flags on the command or by creating an expect script that answers the prompts
<stub> bdx: I don't know sorry. I've only used Swift, and even then only with credentials provided by others.
<bdx> stub: using db's > 30GB ?
<stub> No
<bdx> jesus
<bdx> ok
<bdx> well thanks for writing such an awesome charm
<stub> wal-e can do it. I don't know how to tell ceph to give you a large quota
<bdx> its not ceph
<skaul> the user input provided via the command line has to be read by our juju script which we are writing to automate; through an expect script we can get all these user responses and provide them to our install script?
<bdx> stub: http://docs.openstack.org/liberty/config-reference/content/object-storage-account-quotas.html
<bdx> stub: I feel this `accout quota` data is stored in one of the .rgw buckets
<bdx> `account quota`
<bdx> or possibly in .users.swift
<stub> No idea. I do know I've stuffed over 16TB in swift using other tools, just not wal-e.
<bdx> I have spent a good amount of time trying to figure this out, although I'm getting time boxed by higher ups and am going to have to focus on other things atm
<bdx> hopefully I have aired this out well enough that others might look into it
<marcoceppi> skaul: you can have the user fill in this information via the config.yaml file. So you essentially declare these are the configuration options I need YOU the user to provide, they provide that, and once you have it all collected then you can run the setup script
<stub> bdx: You might need to air it out in openstack or swift channel, unless one of the openstack charmers here happens to know.
<bdx> stub: yea ...shoot ... this is not an issue when using swift backend
<bdx> stub: and radosgw isn't an `openstack` project
<bdx> lol
<skaul> ok, thanks
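marcoceppi's approach — gather the answers through charm config, then prefill the installer's prompts — can be sketched with the stdlib alone when the installer reads from stdin. The installer command below is a stand-in; a real charm would invoke the vendor's install script, and prompts that read the tty directly would need an actual expect script or pexpect:

```python
import subprocess

# Stand-in for an interactive installer that prompts on stdin; a real
# charm would run the vendor's install script here instead.
installer = ["python3", "-c",
             "name = input('License holder? '); print('installed for', name)"]

# Answers gathered from the charm's config options, one per prompt, in order.
answers = "\n".join(["Acme Corp"]) + "\n"

result = subprocess.run(installer, input=answers,
                        capture_output=True, text=True, check=True)
print(result.stdout)
```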
<bdx> stub: thanks, I'll poke around in the ceph channels, but radosgw is really a grey area openstack service I feel like
<bdx> people who are using ceph alongside/underneath their openstack (the majority) aren't going to want to provision a swift solution too
<cory_fu> kwmonroe, admcleod-: New charms.reactive released with the fix for the str.pop bug
<kwmonroe> +1 cory_fu!  thx
<cmars> ok, i've upgraded charm-tools, now I'm getting this issue with `charm build`: https://paste.ubuntu.com/15798307/
<cmars> should i open a bug?
<rick_h_> lol, "otherstuf"?
<tvansteenburgh> marcoceppi: ^
<cmars> https://i.imgur.com/HvMP3pE.jpg
<marcoceppi> cmars: do you have python-otherstuf installed?
<cmars> marcoceppi, probably not
<marcoceppi> cmars: you should have gotten it
<cmars> marcoceppi, nope, it wasn't installed. missing a dependency in the deb?
<marcoceppi> cmars: it looks like, eek, how did this happen
<cmars> lol, python-stuf and python-otherstuf
<marcoceppi> cmars: yeah, it's stupid. /me glares at whoever committed that into the project
<cmars> stuf is the new utils
<cmars> oh snap, now i'm missing pathspec: https://paste.ubuntu.com/15798416/
<cmars> and ruamel.yaml
<marcoceppi> cmars: O_O python-pathspec python-ruamel.yaml
<marcoceppi> cmars: I have no idea how this broke from devel ppa to archive
<cmars> marcoceppi, ok that's all that was missing. my mattermost layer is now happy again :)
<marcoceppi> cmars: dang dude, thanks for pushing through, I'll have a new package ready for update to xenial with these added
<cmars> marcoceppi, no worries, that's why i run xenial and run this stuff everyday :)
<tinwood> thanks marcoceppi
#juju 2016-04-13
<jamespage> gnuoy, now that juju 1.25.5 is release we could also add series data to the metadata.yaml
<gnuoy> kk
<gnuoy> jamespage, got time for https://review.openstack.org/#/c/304193/ and https://review.openstack.org/#/c/304513/ ?
<jamespage> gnuoy, +2'ed first; second still needed a recheck-full so +1 with comment
<gnuoy> thanks
<jamespage> gnuoy, could you look at https://review.openstack.org/#/q/topic:shared-db-spaces for the canonical ci +1'ed ones polease
<gnuoy> sure
<jamespage> gnuoy, they are pretty much copy/pastes...
<gnuoy> jamespage, are you still planning to review https://review.openstack.org/#/c/299669/ for thedac ?
<jamespage> gnuoy, I have some drafts pending...
<jamespage> gnuoy, OK so I'm happy with the way we're installing the profiles...
<jamespage> gnuoy, my comments are all on the profiles themselves
<gnuoy> jamespage, could you look at https://review.openstack.org/#/q/topic:pre1604chsync  for the canonical ci +1'ed ones please?
<jamespage> gnuoy, ok all done where passing
<gnuoy> thanks
<jamespage> gnuoy, it might be nice to try to move to a global-requirements approach for the charms
<jamespage> where we maintain the list of versions centrally; and we get auto-resyncs when that changes
<jamespage> I think the openstack projects do much the same - worth digging into next cycle...
<gnuoy> defo
<jamespage> gnuoy, getting there is odl-controller - the config overrides for the single test were not quite right
<jamespage> gnuoy, https://github.com/openstack-charmers/bot-control for future reference
<gnuoy> ah, ok
<gnuoy> jamespage, fwiw https://review.openstack.org/#/q/topic:pbr-reqs is wandering toward the gate atm
<jamespage> gnuoy, awesome
<jamespage> gnuoy, I've kicked off a full recheck but https://review.openstack.org/#/c/304726/ would be ready for general review...
<jamespage> gnuoy, pbr ones are good to go on a unit test only - but I will wait for UOSCI to confirm that as well
<jamespage> so we wait for smokes...
<jamespage> gnuoy, beisner: I've seen the occasional
<jamespage> 10:45:44 WARNING:root:SSH command check FAIL for 172.17.114.66
<jamespage> during _single tests
<gnuoy> jamespage, ditto
<jamespage> all units are reporting OK...
<jamespage> gnuoy, re https://review.openstack.org/#/c/299669/
<jamespage> my comments are all about the profiles themselves rather than how they get applied
<gnuoy> ok
<gnuoy> jamespage, I'm not sure what to take from that comment tbh
<gnuoy> Are you saying you don't want to land it?
<jamespage> gnuoy, I'm looking for a second opinion on whether we should block on v0.1 profiles
<jamespage> gnuoy, not decided either way just yet
<jamespage> having the feature in allows us to iterate the profiles more easily...
<jamespage> and its default is 'off' so that's ok as well
<gnuoy> jamespage, I think land it, I'll say so on the mp
<jamespage> gnuoy, OK kicked off landing...
<jamespage> gnuoy, hit a mitaka problem with ovs-odl
<jamespage> https://review.openstack.org/305121
<jamespage> gnuoy, gah!
<jamespage> 11:11:37 pkg_resources.VersionConflict: (requests 2.7.0 (/usr/lib/python2.7/dist-packages), Requirement.parse('requests==2.6.0'))
<gnuoy> jamespage, where are you seeing that ?
<jamespage> gnuoy, amulet_full
<jamespage> gnuoy, http://10.245.162.36:8080/view/Dashboards/view/TestCharmPipeline/job/test_charm_amulet_full/391/console
<gnuoy> jamespage, I hit something similar http://paste.ubuntu.com/15808377/ and was asking beisner for a second pair of eyes
<jamespage> gnuoy, I think I know the problem
<jamespage> gnuoy, yah the version of theblues in the stable ppa is too old and has a fixed requests version requirement
<jamespage> gnuoy, the version in xenial is OK - I'll backport that now
<jamespage> marcoceppi, ^^
<gnuoy> thanks
<jamespage> gnuoy, I think the difference is that smoke does not use juju-test
<jamespage> where as recheck-full does
<jamespage> gnuoy, ok taking lunch a bit early to allow the queue to clear
 * jamespage goes to rotavate the veg patch
<gnuoy> I can't tell whether that is a euphemism
<tinwood> gnuoy, so specs/full_stack/next_deploy/mitaka runs without a hitch now.  Trying icehouse now.
<gnuoy> tinwood, fantastic on multiple levels
<tinwood> gnuoy, from your gftw branch with the proposed fix.
<tinwood> gnuoy, yes, I can't quite believe it.  I'm going to run it again to make sure!!
<tinwood> gnuoy, looks like a bug in nova-compute charm for icehouse.  nova-api and nova-network not running and they have neutron-openvswitch subs.
<gnuoy> jamespage, could you look at https://review.openstack.org/#/q/topic:pbr-reqs for the canonical ci +1'ed ones please?
<gnuoy> tinwood, seeing the same thing just now
 * gnuoy goes to look and see whats landed
<tinwood> gnuoy, I'm having a quick look at the charm.
<tinwood> gnuoy, maybe we don't have an amulet test for this scenario?
<gnuoy> tinwood, I'd be surprised if we don't
<tinwood> gnuoy, although perhaps it's when the legacy neutron management was dropped??
 * tinwood is clutching at straws
<gnuoy> tinwood, not sure, looking now
 * tinwood goes to get tea whilst thinking ...
<jamespage> gnuoy, looking
<jamespage> gnuoy, https://review.openstack.org/#/c/304638/ is ready for review
<jamespage> gnuoy, ok all done where possible
<jamespage> gnuoy, tinwood: hmm thedacs apparmor changes introduced restarts for nova-api and nova-network...
<gnuoy> thanks
<jamespage> I wonder whether that needs to be conditional on running nova-networking nodes...
<tinwood> jamespage, yeah, I see where you mean.  They are unconditionally added to the BASE_RESOURCE_MAP (L175 in nova_compute_utils.py)
<jamespage> tinwood, hmm yeah
<jamespage> that's bad
<tinwood> actually L201-212
<tinwood> jamespage, is there a config() item that they should depend on?
<gnuoy> tinwood, fix is almost done
<gnuoy> jamespage, ^
<jamespage> ?
<jamespage> oh ..
<jamespage> I'd check the existing code in the charm - it must do something conditional based on what's passed to it from nova-cc
<gnuoy> jamespage, I am
 * jamespage is puzzled why nova-compute's amulet tests failed
<jamespage> did not fail rather...
<gnuoy> do you mean didn't fail ?
<gnuoy> yeah
<tinwood> jamespage, me too.  Or it needs something extra.
<gnuoy> jamespage, tinwood https://review.openstack.org/#/c/305261/
 * tinwood is looking ...
<jamespage> gnuoy, nit on your commit message but that is all
<jamespage> gnuoy, you might wanna ref the bug at the same time...
<tinwood> gnuoy, shouldn't we also have a test for this in case it regresses in the future?  The existing tests didn't pick it up.
<beisner> tinwood, +Lots for adding coverage when we hit a gap.  i've not dug into the actual issue, but add'l unit or amulet test to hit that would be appreciated.
<gnuoy> tinwood, we should absolutely, but I think we need to unblock things by landing this first
<tinwood> beisner, gnuoy okay - let's land a fix to unblock, but make a bug or card to add tests?   - although this feels wrong :(
<beisner> tinwood, tldr on what broke?
<gnuoy> tinwood, still landing changes 5 days after the change freeze is the big thing that's wrong
<gnuoy> beisner, https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1569886
<mup> Bug #1569886: Nova-compute incorrectly reports workload status <nova-compute (Juju Charms Collection):In Progress by gnuoy> <https://launchpad.net/bugs/1569886>
<tinwood> gnuoy, indeed.
<aisrael> rick_h_: Who should I poke about charm store ingestion? This charm (https://code.launchpad.net/~treasure-data/charms/trusty/td-agent/trunk) was pushed ~10 hours ago but isn't deployable or found in the store
<tinwood> gnuoy, if we create a bug for adding the tests, then you can assign it to me, and I'll pick it up post 16.04
<rick_h_> aisrael: urulama and the team were working on a production issue with that this morning
<rick_h_> bac: mhilton is there any news for aisrael? last I recall seeing IS thought it was 'still running'
<beisner> gnuoy, tinwood - see comment on https://code.launchpad.net/~ajkavanagh/openstack-mojo-specs/mojo-openstack-specs-fix-novaclient/+merge/291734 ... i've switched osci to run specs on that branch to confirm before we land.  thx.
<gnuoy> tinwood, Bug #1569896
<mup> Bug #1569896: Nova-compute amulet tests do not seem to check workload status <nova-compute (Juju Charms Collection):New for ajkavanagh> <https://launchpad.net/bugs/1569896>
<tinwood> beisner, that seems fine.  I put the caveat in for you to see :)
<gnuoy> beisner, fwiw I'm using my git enabled version of mojo specs for my testing
<tinwood> gnuoy, thanks.  I'll put it on my todo list.
<beisner> gnuoy, please rebase and propose back @ our specs.  we'll need to do some rapid fire merges there i suspect.  we need to make sure everyone is using the same stuff.
<gnuoy> beisner, I have done
<gnuoy> I rebased after landing your branch
<bac> aisrael, rick_h_: charmstore was upgraded overnight.  the ingestion job recently finished. can you check again?
<beisner> tinwood, the recent ceilometer api change in the amulet test did break precise, and i've found a middle ground there that works with p-->x.  this nova mojo change looks very similar, so i just want to confirm before we land.
<aisrael> bac: rick_h_: Still not seeing it, at https://jujucharms.com/u/treasure-data/td-agent/trusty/ or deploying cs:~treasure-data/trusty/td-agent
<tinwood> beisner, okay sounds good.
<jamespage> gnuoy, ok so how about we revert the apparmor change and defer that all until 16.07
<jamespage> I'm equally as concerned that we have features landing late...
<urulama> aisrael: no, td-agent did not get published, we can check the logs to see what the issue is
<aisrael> urulama: That'd be great, thanks!
<jamespage> gnuoy, beisner: heat appears to have an amulet issue generally with the restart-on-change tests..
<beisner> gnuoy, cool.  let's land your git wares then :)  i think there are more specs to update similarly though
<beisner> jamespage, heat full was passing before the c-h sync @ https://github.com/openstack/charm-heat/commit/52fd7b261c8b2ce11cb55a054d0eb1856a2b6dcb
<bac> aisrael: i'm still investigating
<urulama> jrwren: would you check in freenode #juju channel and then check the charmload logs on guimaas to see why td-agent is not picked up, please?
<gnuoy> jamespage, fwiw I'm pondering the aparmor thing
<gnuoy> I may even consider spelling it correctly soon
<beisner> jamespage, however ... i actually suspect amulet rev causing that fail (00:21:18.043 ValueError: could not convert string to float: ssh_exchange_identification: read: Connection reset by peer)
<jamespage> gnuoy, I think we're rushing it in too late
<gnuoy> jamespage, ok, agreed
<jamespage> gnuoy, +2 on reverting that change and doing a proper job for 16.07
<jamespage> gnuoy, lemme raise the revert
<gnuoy> jamespage, thanks
<jamespage> thedac can hate me then
<gnuoy> haha
<tinwood> jamespage, is that across all charms or just nova-compute?
<jamespage> gnuoy, https://review.openstack.org/#/c/305275/
<jamespage> tinwood, only compute has apparmor so far
<tinwood> jamespage, oh. my bad.  I thought it was like pause/resume.  never mind.
<chrido> Hi, Where do I set the http-proxy in juju 2.0 beta3?
<tinwood> gnuoy, okay, what's next for me - I'm pending the nova-compute revert for running the next / icehouse test.
<gnuoy> tinwood, I'd suggest taking a look at that nova-compute amulet bug, it's still a valid bug even if the code tripping it has been reverted
<tinwood> gnuoy, okay, I'll look but I'll have to wait until the apparmor is reverted.
<gnuoy> tinwood, why?
<marcoceppi> jamespage: ack, thanks for backporting that
<gnuoy> tinwood, Once  apparmor is reverted we need to be back on the mojo functional tests
<marcoceppi> jamespage: my plan was to backport all the new packages from xenial -> ppa/stable for wily and trusty I just got caught up in other things yesterday
<tinwood> gnuoy, the code's going to be reverted that caused the bug?
<gnuoy> tinwood, the bug is that we believe amulet doesn't check workload status. that's true irrespective of the state of the apparmor code
<tinwood> gnuoy, yes re: mojo testing.
<urulama> aisrael: nothing stands out in the logs. we're gonna try to push your charm directly and see what happens/catch any errors
<aisrael> urulama: Thanks for the update!
<tinwood> gnuoy, kk - I'll have a look at the amulet tests for nova-compute re: workload status
<gnuoy> tinwood, thanks.
<jrwren> aisrael: We are still investigating why that td-agent charm is not ingesting. In the mean time we have verified that it can be pushed and published using that new method, if you care to try.
<tinwood> gnuoy, so the nova-compute amulet tests don't exhibit the problem because the scenario doesn't use neutron-openvswitch --- that's me hazarding a guess?
<suchvenu> Hi
<thedac> jamespage: wrt, apparmor, bummer. But glad we caught the issue
<jamespage> gnuoy, we need to get https://review.openstack.org/#/c/305275/ in
<jamespage> ci failed..
<gnuoy> tinwood, ok, yes, that would make sense, so we are checking workload status it's just we didn't trip the workload status bug
<gnuoy> jamespage, are you proposing we land it regardless because I've got a +2 in my pocket and I'm not afraid to use it
 * tinwood looks the other way
<suchvenu> I am facing some issue while deploying more than 1 consumer charm with my charm.
<suchvenu> The first service charm, when deployed and related to my charm, sets some states. When the 2nd service charm is deployed, it looks like the states are all still set and it doesn't go as per the expected flow
<suchvenu> My scope is SERVICE.
<suchvenu> How do I reset the states so that for the next service (or consumer) it goes as per the correct flow ?
<tinwood> gnuoy, the only time status is actively sought is in the pause-resume check, otherwise it's just looking at the messages via the OpenStackAmuletDeployment base class.
<tinwood> gnuoy, however, the p-r test would notice the blocked on resume and crash out.
<gnuoy> tinwood, ok
<thedac> gnuoy: tinwood: I would have expected self._auto_wait_for_status in __init__ to check for status. That is odd it did not.
<tinwood> thedac, it does.
<gnuoy> thedac, I think it did but the amulet tests don't use neutron
<gnuoy> so the services were running
<tinwood> In the amulet tests everything is okay as gnuoy says it doesn't use neutron.
<thedac> got it
<gnuoy> jamespage, https://review.openstack.org/#/c/305275/ passed amulet smoke
<thedac> gnuoy: jamespage I'll +2 it and weep quietly to myself.
<thedac> although http://10.245.162.36:8080/job/test_charm_pipeline/954/console is still running
<jamespage> thedac, sorry...
<thedac> gnuoy is too quick
<thedac> not your fault
<jamespage> I appreciate the efforts you have made to get this feature in :(
<jamespage> thedac, +1000 and lets make that rock for 16.07
<tinwood> it looks like it needs a slightly harder poke - it's on 'needs verified'
<cory_fu> suchvenu: Just to confirm, you do mean an entirely second service (with a different name in `juju status`) and not another unit of the same service that you got with add-unit, right?  If so, you should be aware that the states set for the first service will remain in effect for that service, but you should get a -relation-joined and -changed events for the second service, which should start a separate conversation flow
<suchvenu> yes a separate service , not a unit
<suchvenu> I guess I am getting relation-joined and -changed for the 2nd service. But there are some states apart from these in my reactive layer
<cory_fu> You should also keep in mind that states set in the charm layer apply to the charm as a whole.  Only states set in the interface layer, using conv.set_state(), will be per-service
<cory_fu> suchvenu: Any chance I could see the code in question?
<suchvenu> sure
<suchvenu> how do I share?
<suchvenu> yes, these states are set in the reactive layer
<cory_fu> If it's in a public repo, a link would be good.  Otherwise, a pastebin or even email would be fine
<suchvenu> it's still not pushed into launchpad
<suchvenu> pastebin link ?
<cory_fu> That's fine.
<suchvenu> how do I share in Pastebin?
<cory_fu> suchvenu: You can just put it in http://pastebin.ubuntu.com/
<suchvenu> done
<cory_fu> And then paste the resulting URL here (or PM me)
<beisner> zul, https://review.openstack.org/#/c/304626
<beisner> rockstar, https://review.openstack.org/#/c/303175
<suchvenu> http://pastebin.ubuntu.com/15812802/
<beisner> ^ ie. review/land each other's wares :)
<cory_fu> suchvenu: Is that code from the consumer charm?
<suchvenu> no.. my charm
<suchvenu> db2 charm
<cory_fu> I don't see in that code where the db.sshconfigured state is ever set
<suchvenu> oh.. i didn't paste full code i guess
<suchvenu> see here pls. http://pastebin.ubuntu.com/15812926/
<rockstar> beisner: yeah, I can't vote on that patch at all (hilariously, I can do Code-Review vote on my own patches)
<beisner> rockstar, you should be able to do a +1
<rockstar> beisner: oh, I can now. Weird. I didn't have that box until a refresh.
<suchvenu> cory_fu , could you see the code now ?
<cory_fu> suchvenu: Ok, I see.  By setting that db.sshconfigured state and including @when_not, you're basically telling the charm that you only ever want to do that block once.  What you actually want is to run that block *every* time, and just depend on your grepping logic to skip keys that are already added.  That way, whenever a new key is provided (or if a key is changed), it will get added.  It does mean, though, that this handler will run for every
<cory_fu> hook, but as long as it handles being run again with the same data ok (which it should because of the grep), then that should be fine
<suchvenu> I have few more functions after this which are based on previous states
<suchvenu> all that would get affected
<cory_fu> One other thing, though, is that get_sshkeys should return a list of keys, and you should loop over them.  I'm not 100% certain how that will work in bash, though.  I'll need to do some tests.
<cory_fu> suchvenu: Oh, you have other handlers that depend on db.sshconfigured?  Well, you can still set that state once you have at least one key, as long as you remove the @when_not decorator so that the file update still happens every time
<suchvenu> yes
<suchvenu> http://pastebin.ubuntu.com/15813149/
<suchvenu> This pastebin has the subsequent functions. Please look once. Can we reset all states per service ? That would actually solve the problem i guess. Not sure how to do that.
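A runnable bash sketch of the idempotent pattern cory_fu describes above: grep for the exact key line first and append only when it is missing, so the handler is safe to run on every hook. The file path and key strings are illustrative, not taken from suchvenu's charm:

```shell
# Append each key only when an identical line is not already present,
# so re-running the handler with the same data is a no-op.
AUTH_KEYS=/tmp/authorized_keys.demo   # illustrative path, not the charm's
: > "$AUTH_KEYS"                      # start empty for the demo

add_key() {
    # -q quiet, -x whole-line match, -F fixed string (no regex)
    grep -qxF "$1" "$AUTH_KEYS" || echo "$1" >> "$AUTH_KEYS"
}

add_key "ssh-rsa AAAB...key-one user@host1"
add_key "ssh-rsa AAAB...key-two user@host2"
add_key "ssh-rsa AAAB...key-one user@host1"   # duplicate: skipped by grep

wc -l < "$AUTH_KEYS"   # → 2
```

With this shape, the `db.sshconfigured` state can still be set once at least one key is present, as long as the `@when_not` guard is removed from the file-updating handler, per cory_fu's earlier suggestion.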
<zul> beisner: yeah i dont have +2
<beisner> zul +1 and i can land
<zul> beisner: done
<cory_fu> suchvenu: Sorry, I'm caught up in a meeting.  I will review your latest link shortly.
<zul> beisner: dont forget btrfs ;)
<suchvenu> sure, please mail me if I am not online
<terje> I'm deploying a charm I'm working on to private openstack clouds. In doing so, I need a few things (endpoints, username/pw, token's), etc.
<terje> Currently I put them in config.yaml and use config-get to access them when the charm deploys but since they differ, per install this isn't going to work
<terje> Is there a better way to go about this?
<marcoceppi> terje: that's the only way to get credential about the environment. What's the purpose of this charm? Is it to be deployed alongside an openstack (like another openstack service) or to do things to openstack from within it?
<terje> I'm using this charm to deploy a heat stack.
<terje> I have a user that uses juju and charms for everything and wants a way to deploy an application using juju, so basically I'm just doing a stack-deploy.
<terje> but in order to do so, I need to know, for example the network uuid, OS_USERNAME, etc.
<gnuoy> thedac, can you put your mojo trousers on today and try and get some green on the spreadsheet ?
<thedac> gnuoy: I can. Are your changes landed?
<tinwood> gnuoy, so using your branch, the icehouse has also passed (I ran it during the meetings).
<terje> is there a way to make 'juju status' default to 'juju status --format=tabular' ?
<rick_h_> terje: you can export the environment variable. /me goes to look it up
<marcoceppi> terje: export JUJU_CLI_VERSION=2
<marcoceppi> terje: it seems odd that you'd use juju to deploy stuff via heat
<marcoceppi> it's not a scenario I've come across before
<thedac> gnuoy: beisner have the required mojo spec changes landed yet?
<terje> yea, it's very weird.
<beisner> thedac, still validating the nova client changes;   haven't touched the git change proposal yet.
<thedac> beisner: ok, please let me know when to start pushing buttons
<terje> cool, that worked for tabular marcoceppi
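For reference, the environment-variable approach marcoceppi gave looks like this; it is scoped to the current shell session, so it goes in `~/.bashrc` to persist:

```shell
# Opt the juju 1.25-era client into 2.x-style CLI defaults, which makes
# `juju status` render tabular output without needing --format.
export JUJU_CLI_VERSION=2
# juju status        # now equivalent to: juju status --format=tabular
echo "$JUJU_CLI_VERSION"   # → 2
```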
<terje> so, I have another question (sorry, pretty new to juju).. The charm I'm deploying has an error in my start hook
<terje> so, it's stuck in 'start'
<terje> a 'juju remove-service' or 'juju remove-unit' do nothing here.
<terje> is there a way to force a kill here?
<marcoceppi> terje: yeah, so when the charm is stuck in a hook error it tends to require admin intervention
<terje> so, login to the VM and kill the process it's hanging on?
<marcoceppi> terje: does juju say the hook errored or is it hanging?
<terje> no, it's just in perpetual 'start' mode
<marcoceppi> terje: interesting, would you be able to share the output of `juju status --format yaml` to paste.ubuntu.com ?
<terje> sure - but before that, logging into the worker VM, I see this process just hanging around: /bin/bash /var/lib/juju/agents/unit-epdg-0/charm/hooks/start
<beisner> thedac, will do
<marcoceppi> terje: yes, that's the process
<terje> http://paste.ubuntu.com/15815245/
<marcoceppi> terje: can you send a sigterm to that process?
<terje> done, and it's gone.
<marcoceppi> terje: juju will kill it off, but the default time is like 2 hours. You can also force kill the machine it resides on to avoid logging into the VM
<terje> so, now it's in error state .. I assume I can juju resolved and make it go away
<terje> checking..
<marcoceppi> terje: yes
<marcoceppi> terje: if you just issue a resolved and not a resolved --retry it'll move on to the destroy service hooks
<terje> yea, that worked.
<terje> it's gone.
<marcoceppi> terje: if it's really problematic, `juju destroy-machine --force 1` is a quick way to circumvent a pesky unit if you really want to get rid of it quick
<marcoceppi> where 1 corresponds to the machine number the unit is deployed on
<terje> ah, great. That's probably what I want while working on this charm. thx.
<marcoceppi> terje: yeah, during dev is a great way to reset things
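Collecting the recovery steps from this exchange into one sketch. The unit and machine names come from terje's session and are purely illustrative; the juju commands are left commented since they need a live environment:

```shell
# 1. On the machine: terminate the hung hook process (the bash process
#    running /var/lib/juju/agents/<unit>/charm/hooks/start).
#    pkill exits non-zero when nothing matches, hence the || true.
UNIT=unit-epdg-0
pkill -TERM -f "/var/lib/juju/agents/$UNIT/charm/hooks/start" || true

# 2. From the client: clear the resulting error state. Plain `resolved`
#    moves on (running any queued destroy hooks); `resolved --retry`
#    would re-run the failed hook instead.
# juju resolved epdg/0

# 3. The bigger hammer for development: force-remove the machine the
#    unit is on, skipping clean teardown entirely.
# juju destroy-machine --force 1
```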
<terje> so, there's no way to pass variables during: juju deploy local:bleh/bleh ?
<beisner> zul, awaiting rockstar's +1 on https://review.openstack.org/#/c/303175/
<beisner> rockstar ^
<zul> k
<terje> ah-ha. juju deploy --config myconfig.yaml might be it
<marcoceppi> terje: you found it, the format for that is documented here: https://jujucharms.com/docs/1.25/charms-config#configuring-a-service-at-deployment
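For reference, the shape that page documents is a mapping keyed by the service name, with option/value pairs beneath it. The option names below are made up for terje's epdg charm, not taken from its config.yaml:

```shell
# Service config file in the format `juju deploy --config` expects:
# top-level key is the service name, nested keys are options that the
# charm declares in its config.yaml.
cat > /tmp/epdg.yaml <<'EOF'
epdg:
  os_username: demo-user
  os_auth_url: http://keystone.example:5000/v2.0
EOF

# juju deploy --config /tmp/epdg.yaml local:trusty/epdg
grep -c '  os_' /tmp/epdg.yaml   # → 2
```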
<beisner> thedac, gnuoy - mojo spec git proposal merged;  still waiting for validation results on the nova client change, expect that within an hr or so.
<beisner> thx for the mods gnuoy
<terje> marcoceppi: using juju deploy --config file.yaml, are those values available to the hooks as (config-get setting) ?
<terje> seems like they are not.. :/
<marcoceppi> terje: yes, they should absolutely be
<marcoceppi> terje: how did you format your file.yaml?
<jcastro> marcoceppi: did you move the layer search? can't find it in juju-solutions
<marcoceppi> jcastro: it was never there. it's in my personal github
<jcastro> oh!
<jcastro> I was going to repull it, having some git issues
<beisner> thedac, ahead of mojo spec change merges, if you have cycles to poke at this, it's a definite new-failure:  bug 1570032
<mup> Bug #1570032: Precise-Icehouse:  publicURL endpoint for network service not found <uosci> <neutron-gateway (Juju Charms Collection):New> <https://launchpad.net/bugs/1570032>
<zul> beisner: mind +1ing https://review.openstack.org/#/c/303175/ please
<beisner> so to be clear, i'm trying to validate tinwood's nova api client usage change against precise, but stumbling over that, have re-confirmed the same outside of mojo specs.  precise is no-talky wrt network api atm.
<beisner> zul, done, it's otw to merge
<zul> beisner: thanks
<tinwood> beisner, sorry if I'm causing problems - do we use precise test runners?
<beisner> tinwood, no.  everything is trusty + liberty cloud archive for client tools
<beisner> tinwood, and that can work all the way back to precise, and i think your change is sound -- i just want to see it work before we merge into the thing that automation consumes.
<beisner> precise has other troubles atm
<tinwood> beisner, ok.
<beisner> thedac, fyi, turning to a precise-icehouse stable charm deployment to validate nova client changes @ https://code.launchpad.net/~ajkavanagh/openstack-mojo-specs/mojo-openstack-specs-fix-novaclient/+merge/291734
<terje> marcoceppi: back from lunch, so my charm is called 'epdg' and looks like this, currently: http://paste.ubuntu.com/15817580/
<terje> I'm not actually sure if I want 'settings' or 'options'
<terje> the deploy error is: ERROR no settings found for "epdg"
<marcoceppi> terje: neither, check this page out https://jujucharms.com/docs/devel/developer-getting-started
<marcoceppi> terje: err, wrong page https://jujucharms.com/docs/1.25/charms-config#configuring-a-service-at-deployment
<beisner> rockstar, can you abandon this one?:  https://review.openstack.org/#/c/301839/
<marcoceppi> terje: it should look more like this http://paste.ubuntu.com/15817918/
<terje> :)
<terje> totally works - so then, OS_USERNAME=$(config-get os_username) for example
<terje> I'm set!
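terje's last line is the idiomatic hook-side pattern. `config-get` is a juju hook tool and only exists inside a running hook context, so this runnable sketch stands in a plain shell function for it; the option names and values are illustrative:

```shell
# In a real hook this would be:  OS_USERNAME=$(config-get os_username)
# Outside juju there is no hook context, so simulate the lookup to show
# how deploy-time config values flow into the hook's environment.
get_option() {                 # stand-in for `config-get <option>`
    case "$1" in
        os_username) echo demo-user ;;
        os_auth_url) echo http://keystone.example:5000/v2.0 ;;
    esac
}

OS_USERNAME=$(get_option os_username)
echo "$OS_USERNAME"   # → demo-user
```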
<thedac> beisner: sorry I have been in meetings non stop. Let me look at the bug
<beisner> thedac, np at all.  we can use the stable charm deploy to validate the pending mojo spec nova client api usage change, then land that, and turn folks loose on specs (except mitaka)
<thedac> ok, and is the precise icehouse issue still in play/
<thedac> ?
<beisner> yes, precise icehouse is bust
<beisner> so that bug is still relevant, but i'm not blocked on exercising the pending mojo spec util change
<thedac> ok
<beisner> thedac, tinwood - mojo spec MP landed.  thanks for the mods.  all should be clear to exercise EXCEPT trusty-mitaka (pending cloud archive pkg flush tomorrow).
<thedac> beisner: great. I'll get started
<beisner> coreycb, swift-storage or swift-proxy?
<beisner> wth won't this merge, icey ? :-/   https://review.openstack.org/#/c/303014/
<cmars> do i still have to use LP to submit a charm for review? i'm developing mine with git and using the new 'charm' command to publish them
<coreycb> beisner, swift-proxy
#juju 2016-04-14
<beisner> coreycb that one has the bzr import issue and isn't in sync @ LP
<coreycb> beisner, oh..  so maybe I'm picking up the LP version
<nullagent> Hi all, having trouble getting juju installed on wily. Just did an install on a new machine yesterday and everything went just fine. Now today I decided to re-install on my dev machine to make sure I'm charm dev-ing in an identical environment. Everything fails to start for some reason. No running jujud and my /var/lib/juju is completely empty
<hoenir> hi guys, someone could help me with some commands in juju? I'm trying to remove a pending machine with the command juju remove-machine 0
<hoenir> but when I do juju status I'm getting the machine.. and it does not remove
<hoenir> here is my output on juju status
<hoenir> https://paste.ubuntu.com/15824849/
<hoenir> I was deploying this machine when my machine was suddenly shut down and now I have this machine in "pending.." state how can I remove it and start again the deployment?
<hoenir> oops, two "machine" words..
<hoenir> anyway reading the doc on juju it says that I should run the juju destroy-environment <environment> but how can I know what <environment> I'm in?
<blahdeblah_> hoenir: juju switch will tell you which env you're currently in
<hoenir> ohh but something tells me that I should not remove this... anyway how can I remove that problematic machine?
<hoenir> or should I realease from the "MAAS web interface" that machine?
<hoenir> I tried from the MAAS web interface, my node is "released" but from juju status I'm still getting that machine in pending state any thoughts on this?
<hoenir> juju status
<chrido> hoenir: no thoughts, but if destroy-environment does not work you can try --force
<hoenir> tried to destroy controller but I hangs.. any advice?
<hoenir> I just wanted to reset everything from 0 to bootstrap again
<chrido> with --show-log --force?
<hoenir> on juju destroy-controller --show-log --force mycontroller
<hoenir> like this?
<chrido> yes
<hoenir> "force doesn't exist"
<hoenir> with just the --show-log opt it's just "dialing"
<chrido> you are using juju2 i guess, i think it's juju kill-controller
<bradm> hoenir: this with juju2 ?  you can try juju2 kill-controller
<hoenir> yeee, it worked..thanks a lot guys!
<hoenir> and yeah I'm using the 2.0 juju version
<hoenir> so what's the primary difference with destroy and kill?
<hoenir> could anyone clarify this?
<bradm> I think kill is the new --force
<hoenir> oh, thanks a lot again !
<chrido> here is some explanation: https://jujucharms.com/docs/devel/commands
<chrido> also destroys the model
<chrido> Does somebody know where in Juju2 you should put the proxy configuration? (Juju 1 had set-env or you could put it in the environments.yaml)
<jamespage> gnuoy, urgh
<jamespage> xenial specific issue:
<jamespage> http://10.245.162.19/test_charm_pipeline_amulet_full/openstack/charm-neutron-gateway/305121/1/2016-04-13_18-21-57/test_charm_amulet_full/juju-stat-tabular-collect.txt
<jamespage> gnuoy, https://bugs.launchpad.net/charms/+source/neutron-gateway/+bug/1547122
<mup> Bug #1547122: xenial: nova-api-metadata not running post deployment <openstack> <xenial> <neutron-gateway (Juju Charms Collection):New> <https://launchpad.net/bugs/1547122>
<jamespage> I raised a bug a while back, but had not hit it since...
<jamespage> systemd is less aggressive with restarting services that shut down straight away
<gnuoy> jamespage, so you think previously it was crashing straight away but upstart was restarting it till it worked?
<jamespage> gnuoy, yeah
<jamespage> gnuoy, it's racy so you don't see it all of the time on systemd based installs - this is the 3rd hit i've seen since I started testing xenial
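In unit-file terms: upstart jobs for these daemons commonly carried `respawn`, so a service that exited immediately at boot was retried until its dependencies came up, while systemd's default is `Restart=no`, which leaves it dead. A drop-in along these lines would restore retry behaviour; it is purely illustrative, not what the charm fixes referenced below actually do:

```ini
# /etc/systemd/system/nova-api-metadata.service.d/restart.conf
# Illustrative drop-in only (hypothetical file, not from the charm).
[Service]
Restart=on-failure
RestartSec=2
```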
<gnuoy> jamespage, on the plus side its nice the way workload status calls it out clearly
<jamespage> gnuoy, yeah
<jamespage> gnuoy, ok fixes up for nova-cc and neutron-gateway
<gnuoy> jamespage, kk, thanks
<gnuoy> jamespage, it looks like you're not gating the restart on paused status for neutron-gateway
<jamespage> gnuoy, oh good point
<jacekn_> hello. Can somebody check why my charm is still not recommended in the charm store? It was meant to happen 2 weeks ago. I know ingestion was broken but I'm pretty sure it's fixed now. https://bugs.launchpad.net/charms/+bug/1538573
<mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:Fix Released> <https://launchpad.net/bugs/1538573>
<jamespage> gnuoy, that restart service on nonce change is a pattern we should model
<jamespage> but later...
<jamespage> gnuoy, ok both of the reviews for https://bugs.launchpad.net/charms/+source/neutron-gateway/+bug/1547122 are good
<mup> Bug #1547122: xenial: nova-api-metadata not running post deployment <openstack> <xenial> <neutron-gateway (Juju Charms Collection):In Progress by james-page> <nova-cloud-controller (Juju Charms Collection):In Progress by james-page> <https://launchpad.net/bugs/1547122>
<jamespage> not proposing a full recheck as the change is not specific to xenial
<jamespage> gnuoy, just running smoke testing on staging now so we can promote through to updates...
<jamespage> coreycb, hit a haproxy problem last night which has now been resolved..
<jamespage> dosaboy_, rebased https://review.openstack.org/#/c/300164/ for you
<jamespage> turns out we can all contribute to each others changes !
<jamespage> gnuoy, ^^
<gnuoy> kk
<dosaboy> jamespage: you are too kind
<tinwood> gnuoy, how long should simple_os_checks.py take?  I think I've got a 'stuck' test?
<gnuoy> tinwood, typically 2 or 3 mins
<tinwood> oh.
<tinwood> gnuoy, that's not good then.  I'll have to see where it got stuck.
<gnuoy> tinwood, give me a shout if you want a hand
<tinwood> gnuoy, will do.
<dosaboy> gnuoy: might wanna squeeze https://bugs.launchpad.net/charms/+source/swift-proxy/+bug/1570314 into 16.04
<mup> Bug #1570314: can't set min-part-hours back to zero <swift-proxy (Juju Charms Collection):In Progress by hopem> <https://launchpad.net/bugs/1570314>
<dosaboy> jamespage: ^^
<urulama> jacekn: which is your collectd charm? ingestion fix doesn't touch that part (just makes it fast again and deal with disk space)
<jacekn> urulama: https://jujucharms.com/u/jacekn/collectd/trusty/0 . Marked as "Fix Released" with +1 from marcoceppi here: https://bugs.launchpad.net/charms/+bug/1538573
<mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:Fix Released> <https://launchpad.net/bugs/1538573>
<urulama> jacekn: the logs see revision 0, but not newer one
<urulama> jacekn: i'll put it on the list. if you want, you can always push it directly with the new charm command from juju/devel ppa
<icey> beisner: no idea why that won't merge
<icey> oh yeah I do, it depended on the abandoned C-H sync commit, let me re make the commit on master
<lazyPower> urulama - is there a way that I can flip the necessary bits in launchpad to enable things like bug reporting on a package that doesn't exist in the old launchpad structure? as an example try filing a bug here and watch it complain - https://bugs.launchpad.net/charms/+source/nexentaedge-swift-gw/+filebug
<icey> beisner: https://review.openstack.org/#/c/305780/
<icey> jamespage: this replicates the abandoned merge's functionality
<urulama> lazyPower: not sure i understand correctly. you'd like to have just bug reporting done in LP, but the code would be somewhere else?
<lazyPower> urulama - correct, but i think i'm going to move out of the charms collection to do this, as there are some restrictions in there that just dont make sense for this application.
<beisner> jamespage, gnuoy - i believe ceilometer is legitimately failing to pause/resume @ wily-liberty (re: https://review.openstack.org/#/c/304188/)
<jamespage> beisner, \o/
<jamespage> craple
<gnuoy> beisner, jamespage, I can grab that and check it out
<beisner> i'd say let's land that amulet test update, because it is identical to ceilometer-agent, except for the pause/resume, and i've not touched that.
<jamespage> gnuoy, I have my test in odl atm
<jamespage> test/head
<jamespage> you can see what its doing to me...
<gnuoy> beisner, +1
<beisner> glad your head is in test, jamespage
<beisner> ;-)
<gnuoy> beisner, would you mind filing a bug for that and I'll grab it?
<beisner> gnuoy, yep sec
<jamespage> beisner, btw I have a ch branch that switches our amulet to github
<jamespage> however
<jamespage> amulet borks on it atm
<jamespage> I tried working the feature for an hour this morning and got tangled - hoping marcoceppi might repay me my upload work for him in amulet features...
<beisner> jamespage, :) saw that wip.  fyi, already had a card in backlog, added your name to it and a link to that review, to revisit in case we don't push that now.
<beisner> it looks like its close
<jamespage> beisner, what package provides juju-test ?
<beisner> charm-tools i believe
<jamespage> beisner, oh
<jamespage> ok
<beisner> gnuoy, https://bugs.launchpad.net/charms/+source/ceilometer/+bug/1570375
<mup> Bug #1570375: pause/resume failing for wily-liberty (blocked: apache2 still running) <uosci> <ceilometer (Juju Charms Collection):New> <https://launchpad.net/bugs/1570375>
<beisner> she's baaaack.  the blessed apache2
<gnuoy> beisner, ta
<beisner> yw gnuoy thx for looking
<beisner> icey, ha!  good catch, i totally missed that review dependency.
<icey> heh yeah beisner, new change up also updates the C-H in tests but should be passing shortly :)
<marcoceppi> jamespage: let me know what features you need, you got it.
<jamespage> marcoceppi, https://github.com/juju/amulet/issues/127
<jamespage> marcoceppi, just uploaded charm 2.1.1 btw
<marcoceppi> jamespage: <3 thank you so much. That's it for packaging from me. I'll just turn everyone else down from this point forward
<marcoceppi> jamespage: where would this show up?
<marcoceppi> jamespage: in like the Deployment.add()?
<jamespage> marcoceppi, yah so
 * jamespage looks for references
<jamespage> right now we use the bzr branches on lp
<jamespage> marcoceppi, I want to do https://review.openstack.org/#/c/304477/1/tests/charmhelpers/contrib/openstack/amulet/deployment.py
<jamespage> marcoceppi, the location field gets handed to amulet here
<jamespage> https://review.openstack.org/gitweb?p=openstack/charm-neutron-api.git;a=blob;f=tests/charmhelpers/contrib/amulet/deployment.py;h=d451698d344942d957a922529d7caf352e31f1ec;hb=80108561c5b7dba5b7c62811c66a2d5b69e772f0#l68
<marcoceppi> jamespage: gotchya, I'll see if we can get a patch to amulet this week
<marcoceppi> jamespage: this has ramifications on deployer, which we use to underpin amulet, but I'm sure we can make this happen
<jamespage> marcoceppi, deployer already supports this format :-)
<marcoceppi> jamespage: then this will be easy
<jamespage> beisner, gnuoy: hey so are we going to do the series-in-metadata thing now that 1.25.5 is out?
<jamespage> it would mean that anyone who wants to use the new charms would have to upgrade to latest 1.25 first...
<lazyPower> jamespage - i tested that using whats in proposed and it still choked on series-in-metadata
<jamespage> lazyPower, well that answers that question - I thought mgz_ had fixed that...
<lazyPower> I dont want to cry wolf if this was fixed since Monday
<rick_h_>  lazyPower jamespage series in metadata choked 1.25.5?
<rick_h_> lazyPower: yes, 1.25.5 that went out yesterday has the fix?
<lazyPower> rick_h_ - was that release staged in -proposed ppa?
 * lazyPower pulls latest stable to test
<rick_h_> yes
<lazyPower> ill re-verify, 1 sec
<rick_h_> it was in proposed for a bit and hit released yesterday
<lazyPower> yeah i was using what was in proposed
<lazyPower> and it tanked on me on monday, so lemme flex this again and turn myself into a liar - which is best case scenario here
<rick_h_> lazyPower: k, yes please let me know what the metadata looks like and verify version
<rick_h_> lazyPower: because we specifically held onto 1.25.5 to get that fixed
<lazyPower> ubuntu@charmbox:~$ juju --version
<lazyPower> 1.25.5-trusty-amd64
<lazyPower> rick_h_ - i'm full of lies, looks like it works. Apologies for the noise :)
<lazyPower> jamespage ^
<jamespage> lazyPower, lol
<rick_h_> lazyPower: ok, phew. I'll not go nuts then :)
<marcoceppi> lazyPower: I meant to mention it yesterday at standup, but we tested it at bigdata sprint and it works
<lazyPower> marcoceppi - its weird tho the proposed ppa was choking on  series in metadata so we reverted it. we must have been running on -stable and not -proposed like we thought we were or something
 * lazyPower shrugs
<lazyPower> i'm just happy to see that i was wrong :D
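[Editor's note: for readers following along, the "series in metadata" feature being verified here lets one charm declare all the Ubuntu series it supports in metadata.yaml, rather than keeping a branch per series; it needs juju >= 1.25.5. A minimal sketch, with placeholder name/summary:]

```yaml
name: mycharm            # placeholder charm name
summary: Example multi-series charm
description: |
  Declares supported series in metadata; requires juju 1.25.5 or later.
series:
  - trusty
  - xenial
```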
<rick_h_> lazyPower: all good, it happens to the best of us. I mean, there was that one time a few years ago it happened to me :P
<lazyPower> rick_h_ - thats pretty much the bar i set for every week. Find one thing i've been complaining about thats been fixed. I'll gladly be the guy thats always wrong if you keep fixing my bugs :D
<lazyPower> #thingsilearnedfromjorge
<rick_h_> hah
 * rick_h_ sees lazyPower turning that into a coffee table book
<lazyPower> rick_h_ - i thought you were on vacation this week?
<rick_h_> lazyPower: starts tomorrow
<rick_h_> lazyPower: so one more day to bug you all before I go :P
 * marcoceppi gets my long list of last min asks for rick_h_
<lazyPower> rick_h_ i want a pony, can you stuff a pony in before release?
 * aisrael adds to the list
 * rick_h_ notes all lists must be mailed in triplicate and delivered by USPS...no chance of it getting here before I'm on a plane now!
<bbaqar> Anyone got a working openstack bundle for xenial?
<jamespage> bbaqar, yeah
<jamespage> one second
<jamespage> bbaqar, I do - I'll try push it to lp today (have a couple of meetings todo)
<jamespage> bbaqar, it will appear here - https://jujucharms.com/u/openstack-charmers-next/openstack-base
<BrunoR> Hi. The submission to the charmstore is described using bazaar, how can I use git instead?
<jcastro> cmars: stokachu: hey guys, any of your layered charms worth polishing off and pushing into review for the store?
<bbaqar> james, much appreciated
<cmars> jcastro, possibly. does that mean i have to use LP?
<jcastro> hah no
<cmars> jcastro, how do i submit a charm for review straight out of CS?
<jcastro> https://jujucharms.com/docs/devel/authors-charm-store
<cmars> sweet
<jcastro> ^^ this will cover you too BrunoR
<cmars> jcastro, cs:~cmars/gogs is close
<cmars> jcastro, i've also got mattermost in devel
<jcastro> I see it, that's awesome.
<jcastro> I'd like to like, tell those projects what you're up to, but ideally it'd be something solid and testable, something they could be proud of if you know what I mean
<jcastro> BrunoR: the new store doesn't care about vcs, you just use the charm command to publish to the store from a working directory
<jcastro> but you will need the latest version of the charm tools from the ppa
<BrunoR> jcastro: ok, thanks. that means I can publish the charms on github and do charm push (with my launchpad-account) from my working directory?
<jcastro> BrunoR: yep.
<jcastro> BrunoR: from there you can basically publish into a devel and stable channels as you see fit
<marcoceppi> cmars jcastro that won't get it submitted for review, that will just get it into the charm store
<jcastro> right
<jcastro> I was just about to get to the review queue portion
<jcastro> so we're working on a new review queue where you'll just submit whatever version you just published in your stable channel for review
<jcastro> and then at some point jujucharms.com/foo will point to your version of the charm
<jcastro> or you can leave it in your personal namespace, depending on what you want.
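[Editor's note: the vcs-free workflow jcastro describes can be sketched as below; the user, charm name, and revision number are placeholders, and it assumes charm-tools from the PPA mentioned above.]

```shell
# Push the charm's working directory to your personal namespace in the store...
charm push . cs:~myuser/mycharm

# ...then publish a pushed revision to a channel (revision 0 is illustrative).
charm publish cs:~myuser/mycharm-0 --channel stable
```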
<tinwood> jamespage, beisner: may have found a bug in the dpdk code in neutron-openvswitch whilst testing dvr (mojo): https://bugs.launchpad.net/charms/+source/neutron-openvswitch/+bug/1570411
<mup> Bug #1570411: custom add_bridge_port(...) function doesn't bring up interface <neutron-openvswitch (Juju Charms Collection):New> <https://launchpad.net/bugs/1570411>
<beisner> tinwood, woot :) appreciate the test spike on that.
<tinwood> beisner, kk :) I'm trying out a fix -- let you know how it goes.
<icey> beisner: you've got 2 community approves on https://code.launchpad.net/~xfactor973/charm-helpers/ceph-pool-replicas/+merge/291827 :)
<beisner> icey, cholcombe - ok thanks guys.  will merge shortly.  will both ceph and ceph-mon need resync'd?
<icey> I believe so, yes
<icey> and thanks beisner!
<icey> also, I'll see about writing a couple of tests to cover this case in the future
<jamespage> tinwood, ugh thats possible
<tinwood> jamespage, I'm trialling a fix at the moment.
<beisner> cholcombe, c-h change landed.  clear to propose re-syncs.  (fyi jhobbs )
<cholcombe> beisner, thanks
<beisner> tyvm cholcombe jhobbs
<cholcombe> beisner, so how many charms do you think are affected?  i think ceph, ceph-mon, cinder, glance, radosgw.  anything else?
<beisner> cholcombe, let's get jamespage 's assessment on that.  i'm not sure.
<cholcombe> ok
<beisner> jamespage, fyi bug 1565120 ^
<mup> Bug #1565120: incorrect replica count in a single-unit ceph deployment <oil> <ceph (Juju Charms Collection):In Progress by xfactor973> <glance (Juju Charms Collection):Invalid> <https://launchpad.net/bugs/1565120>
<jamespage> beisner, cholcombe: does the pool creation happen in the ceph and ceph-mon charms right?
<cholcombe> jamespage, i believe it does
<cholcombe> i'm going to resync both of them
<jamespage> cholcombe, I think that scope makes sense to me
<jamespage> that was the intent of the broker - shove everything serverside in ceph itself...
<cholcombe> so i can skip cinder/glance/radosgw
<jamespage> beisner, ok I switch the xenial test for odl to use BE - point testing that before I do a recheck-all
<jamespage> full rather
<beisner> ack jamespage thx
<jamespage> beisner, graaddhdhdhhdhsafdchjk cvsdjhn]#
<jamespage> 14:43:20 Permission denied (publickey).
<jamespage> 14:43:20 ConnectionReset reading response for 'BzrDir.open_2.1', retrying
<jamespage> 14:43:20 Permission denied (publickey).
<jamespage> on a recheck-full on mitaka-xenial
<jamespage> that's like the last test...
<beisner> wee.   i suspect the IS-outage is affecting us.
<jamespage> ggrrrreeat!
<beisner> also hit rockstar's lxd test
<rockstar> Yup. I was waiting for that to get sorted before charm-recheck
<jamespage> is it just me or has everything been working against us this week....
<rockstar> jamespage: it's not just you, but I don't have the luxury of drinking a beer at the end of the day. :)
<beisner> http://i.imgur.com/KZyNequ.gif
<c0s> guys, any references to the nodejs layer source code? I can not find anything on the github ;(
<lazyPower> stokachu - you did the node layer didn't you?
<lazyPower> c0s https://github.com/battlemidget/juju-layer-node
<stokachu> yea
<c0s> yep, thanks lazyPower - just found it too ;)
<stokachu> c0s: interfaces.juju.solutions points to the upstream git repo too
<c0s> k!
<c0s> thanks stokachu
<stokachu> np, patches very welcome too ;)
<c0s> Sure, but I won't touch nodejs :)
<c0s> I want to take a look at it so I can do puppet layer ;)
<c0s> total noob with charms
<bdx> c0s: whats your strategy on the puppet layer?
<bdx> c0s: heres what I have going on so far for a puppet agent layer -> https://github.com/jamesbeedy/layer-puppet-agent
<jamespage> beisner, oh chicken and egg
<jamespage> I need https://review.openstack.org/#/c/305121/ to land before I can get the xenial amulet tests to pass again...
<jamespage> for odl-controller
<c0s> bdx I am doing a master-less puppet layer, so we can get a fixed version of puppet and (mostly) hiera on trusty
<c0s> this will be a pretty dumb-ass one: just installing packages from the correct puppetlabs repo
<beisner> jamespage, cholcombe, rockstar - i've squashed the check that has been intermittently causing "ERROR:root:One or more units are not ok (machine 0 ssh check fails)" .. you'll need a recheck if you saw that.   the underlying issue is unidentified, but the bootstrap node is clearly not accessible at the moment we were checking.
<beisner> will add card to revisit and t-shoot as a potential juju issue
<cholcombe> beisner, yeah i saw a bunch of failures that seemed wrong because they passed tests locally
<rockstar> zul: if you please - https://review.openstack.org/#/c/305896/
<beisner> fyi that is re: the charm single test.   compounding that, it looks like several tests were false-failed by an internet/infra hiccup (lp bzr ssh fails)
<beisner> recheck ftw
<zul> rockstar: done
<bdx> c0s: look at my layer
<bdx> c0s: that is exactly what it does
<bdx> c0s: if you `charm build` layer-puppet-agent, you will get a charm that does what you are talking about
<bdx> c0s: I made layer-puppet-agent for that purpose alone
<bbaqar> Getting an "ERROR cannot resolve charm URL cs:xenial/ubuntu: charm not found" when i deploy the ubuntu charm on xenial .. where should i pull it from .. should i just use the trusty one?
<marcoceppi> bbaqar: just use the trusty one
<marcoceppi> bbaqar: why do you need the ubuntu charm anyways?
 * marcoceppi is curious
<bbaqar_> What is the user/pass for logging into mysql
<sparkiegeek> bbaqar: https://jujucharms.com/u/landscape/ubuntu/xenial/0 if you need it :)
<marcoceppi> bbaqar: that's created when you create a relation, the root password is available on the unit in /var/lib/mysql/mysql.passwd
<marcoceppi> lazyPower: 2.1.1 backported to juju/stable should be built in a few mins
<lazyPower> \nn/,
<bbaqar_> sparkiegeek: thanks :)
<jhobbs> hey beisner - just curious when this charmhelpers fix will be synced to the -next ceph charm https://code.launchpad.net/~xfactor973/charm-helpers/ceph-pool-replicas/+merge/291827
<jhobbs> any ideas?
<beisner> jhobbs, cholcombe has the syncs proposed and pending CI/functional tests now
<beisner> hoping to have results and land tonight
<jhobbs> beisner: excellent, thanks
<bbaqar_> marcoceppi: there is no /var/lib/mysql/mysql.passwd .. i am on cs:trusty/percona-cluster-32
<marcoceppi> bbaqar: I'm not sure where percona-cluster puts it
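[Editor's note: a sketch of retrieving the password marcoceppi mentions; the path applies to the mysql charm, not percona-cluster (which stores it elsewhere, as noted), and the unit name is a placeholder.]

```shell
# Read the root password the mysql charm stores on the unit.
# (Does not apply to cs:trusty/percona-cluster, per the discussion above.)
juju ssh mysql/0 'sudo cat /var/lib/mysql/mysql.passwd'
```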
<beisner> jhobbs, yw & thx for raising that.  fyi we've got a card to implement that scenario as a regression test before 16.07.
<jhobbs> beisner: np, glad we decided to start testing on -next. regression test sounds good, that will ease my mind about keeping that scenario working
<beisner> jamespage, looks like your neutron-gateway change is tripping over bug 1570032 (raised yesterday, tldr:  all precise is borky on this)
<mup> Bug #1570032: Precise-Icehouse:  publicURL endpoint for network service not found <uosci> <neutron-gateway (Juju Charms Collection):New> <https://launchpad.net/bugs/1570032>
<beisner> thedac, did you have any cycles to poke at that ?^
<thedac> beisner: I have not yet. I'll try and look this afternoon
<beisner> much appreciated thedac
<bbaqar_> in HA why doesn't horizon ask for provider type and segmentation id, when creating a network from admin?
<bbaqar_> Is there any config that needs to be done for it?
<bbaqar_> I can use CLI to make an external network with provider type and segmentation id but why cant i do that in horizon
<c0s> hmm... interestingly it seems that apt layer doesn't work ;(
<marcoceppi> c0s: how so?
<marcoceppi> I use it frequently and works quite well
<marcoceppi> beisner coreycb do you have an answer for bbaqar's question? ^^
<c0s> after the apt layer is done there are no sources added to /etc/apt/sources.list.d/
<c0s> as a result, the wrong version of the package gets installed
<c0s> marcoceppi: I am using it for 2 install sources, not sure if that causes the problem
<beisner> hi bbaqar_, marcoceppi - i've only done that sort of admin network create/wiring foo via nova and neutron cli.   if the behavior is unexpected, please raise a bug and be sure to include details of what is deployed (openstack version, ubuntu version, bundle if you have it, etc).
<c0s> https://github.com/c0s/juju-layer-puppet/blob/master/config.yaml
<marcoceppi> c0s: remove the quotes?
<marcoceppi> c0s: https://git.launchpad.net/layer-apt/tree/README.md#n28
<c0s> ok, lemme try
<c0s> yeah, I read it...
<tvansteenburgh> options: apt: ....
<tvansteenburgh> aren't you missing the apt key?
<marcoceppi> tvansteenburgh: these are config.yaml not layer.yaml
<c0s> it's right there in the config
<c0s> what marcoceppi said
<rockstar> beisner: could I get a Workflow +2 on this? https://review.openstack.org/#/c/305896/
<c0s> nope, marcoceppi - removing quotes doesn't have any effect
<beisner> rockstar, we'd need to update bundles in o-c-t (and possibly the charm store <-- jamespage ) and i want to make sure we have that all queued up to land along side this.  otherwise deploy tests will start to fail if we land this alone.
<rockstar> beisner: are those gate tests?
<beisner> rockstar, no, but they run every day, and are part of the release process
<rockstar> beisner: do you want me to do that?
<beisner> rockstar, yes please.   lp:openstack-charm-testing    ...the *next* files in the bundles/lxd and bundles/multi-hypervisor dirs will need MPs.   then after release, the *default* files in the same dirs will need MPs.
<beisner> rockstar, fwiw, your gate would be ++many-hours if we did all that on commits ;-)   hence the scheduled runs.
<rockstar> beisner: cool. Branch coming.
<c0s> marcoceppi: is there a need to do a configure_sources call from the apt layer to make these sources available somehow?
<marcoceppi> c0s: shouldn't
<marcoceppi> c0s: stub wrote the layer, he might be able to help
<c0s> thanks marcoceppi. stub - if you can help to figure out why apt doesn't configure sources in this commit
<c0s>   https://github.com/c0s/juju-layer-puppet/commit/59bdb701af0dc65054bc33e935c993fd97cbb2ab
<c0s> I would be real grateful ;)
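[Editor's note: for reference, stub's layer-apt reads its sources from charm config; a sketch of the config.yaml shape being debugged here, with the option names as documented in the layer's README and a placeholder PPA.]

```yaml
options:
  install_sources:
    type: string
    default: |
      - ppa:puppetlabs/ppa    # placeholder source
    description: YAML list of apt sources for the apt layer to configure.
  install_keys:
    type: string
    default: |
      - null                  # PPAs need no explicit key entry
    description: GPG keys matching install_sources, one entry per source.
```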
<marcoceppi> c0s: layer is off to a good start
<jcastro> hey so marcoceppi
<jcastro> https://jujucharms.com/docs/devel/getting-started
<jcastro> so if get-started ends  up going here I have some concerns
<jcastro> so first off, this is a kickass local/zfs setup, which is fine
<marcoceppi> jcastro: some is an understatement
<jcastro> but it feels a little full on if someone just wants to try it
<marcoceppi> jcastro: yes, but it's not terrible
<jcastro> right
<jcastro> I am talking from a zero-to-workload aspect
<jcastro> obviously if someone is using juju for more than just trying it we'd want to give them this experience
<jcastro> I am just concerned that it's like "want to try juju? step one, ZFS IN YOUR FACE."
<bdx> c0s: you are using the layer apt to install the sources, but not the packages
<jcastro> it's like dang all I wanted to do was fire up mysql
<bdx> c0s: see -> https://github.com/jamesbeedy/layer-kibana
<marcoceppi> jcastro: lets talk Monday about it
<jcastro> ack
<c0s> thanks bdx - checking
<c0s> bdx - I am actually installing the packages in the layer
<c0s> bdx, the call is right here https://github.com/c0s/juju-layer-puppet/blob/master/reactive/puppet.py#L48
<c0s> bdx, I see what you're saying
<c0s> I think you're right - let me try
<bdx> c0s: also, you must run `apt update` after you add the puppet sources
<bdx> that is the root of your issue
<c0s> ok, got it - let do just that.
<bdx> c0s: https://github.com/c0s/juju-layer-puppet/pull/1
<bdx> that should get you going
<bdx> c0s: per ^ -  I said that backwards .. you are installing the packages with apt layer, but not the sources
<bdx> c0s: which shouldn't really matter, but consistency is also nice
<bdx> c0s: also, you will need the packages puppet + puppet-common
<bdx> just puppet doesn't cut it unfortunately
#juju 2016-04-15
<jamespage> tinwood, gnuoy: good morning
<tinwood> good morning :)
<gnuoy> o/
<jamespage> tinwood, mmmm
<jamespage> looking at your port up change
<tinwood> jamespage, yup ?
<freak__> hi guys
<freak__> is anybody available right now who can help me out
<jamespage> tinwood, yeah - your fix is great for a normal port
<freak__>  i'm deploying openstack base bundle
<jamespage> but for a dpdk port, the ip link commands will fail...
<tinwood> jamespage, but broken for other things?
<freak__> through command juju quickstart openstack-base
<freak__>  but i'm facing issue http://paste.ubuntu.com/15845425/
<tinwood> jamespage, oh
<jamespage> tinwood, yah - a smaller fix for now would be to specialize that function for the dpdk codepath and use the charmhelpers version for non-dpdk
<tinwood> jamespage, ya, I understand.  On it.
<jamespage> freak__, yeah - your lxc containers look unhappy
<tinwood> jamespage, any reading material on dpdk?
<jamespage> freak__, "failed to retrieve the template to clone"
<jamespage> tinwood, erm
<freak__> thats what i want to know why containers not initializing
<jamespage> tinwood, look at line 360 of that same file
<jamespage> that's the bit that does the dpdk port adds...
<tinwood> jamespage, kk thanks.
<jamespage> freak__, I'd drop onto one of the physical hosts and take a look in /var/log/juju/machine-*.log
<jamespage> might give you a bit more of a clue
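[Editor's note: a quick way to pull container-related problems out of the machine logs jamespage points at; the log path is from the discussion, the grep patterns are just an illustration.]

```shell
# Filter a juju machine log for lxc/container-related errors and warnings.
filter_container_errors() {
    grep -iE 'error|warning' "$1" | grep -iE 'container|lxc'
}

# Typical use on a deployed physical host:
#   filter_container_errors /var/log/juju/machine-0.log
```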
<freak__> ok.let me check
<jamespage> the error is coming from lxc - so it might be some sort of firewall egress issue? is the environment your are deploying in network limited in any way?
<freak__> WARNING juju.apiserver.client status.go:679 error fetching public address: public no address
<freak__> my lab subnet is different and i'm doing NATing on my router to go to outside world
<jamespage> bbaqar, around?
<jamespage> freak__, hmm that should be ok
<jamespage> freak__, that public address message is probably ignorable as well
<freak__> jamespage how can i check whether my cluster controller is performing dhcp and dns correctly or not
<freak__> any command to check status of these services
<jamespage> freak__, I'd drop onto one of the deployed machines and check dns is all good that way (dig or suchlike)
<freak__> ok
<freak__> jamespage check this http://paste.ubuntu.com/15845529/
<freak__> i think its working fine as far as dns is concerned
<jamespage> freak__, yeah - that looks fine
<jamespage> hmm
<jamespage> freak__, could you pastebin the output of sudo lxc-ls -f
<freak__> ok
<freak__> http://paste.ubuntu.com/15845565/
<freak__> jamespage http://paste.ubuntu.com/15845565/
<jamespage> freak__, yeah got it thanks
<jamespage> freak__, we need to get a bit more debug output
<freak__> ok
<jamespage> freak__, the juju controller/bootstrap node caches the images for lxc - I just did a quick test with 1.25.5 locally and it worked OK
<jamespage> freak__, ok I think we can do this without re-bootstrapping
<freak__> i think that the issue with lxc service start
<jamespage> freak__, can you do
<jamespage> juju set-env logging-config="<root>=DEBUG;unit=DEBUG"
<jamespage> that will up the debug level across the environment
<jamespage> freak__, and then ssh to one of the nodes and delete the lxc container
<jamespage> freak__,
<jamespage> sudo lxc-destroy --name juju-trusty-lxc-template
<jamespage> and then have another go at adding an lxc container to that machine
<jamespage> freak__, juju add-machine lxc:X (where X is the machine id of the server you just deleted the template from)
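[Editor's note: collected in one place, the debug sequence jamespage walks through, using machine 0 as the example machine id X.]

```shell
# Raise the logging level across the environment (juju 1.x syntax, as above).
juju set-env logging-config="<root>=DEBUG;unit=DEBUG"

# On the affected host, remove the cached/stale template container...
juju ssh 0 'sudo lxc-destroy --name juju-trusty-lxc-template'

# ...then ask juju for a fresh container, which re-creates the template.
juju add-machine lxc:0
```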
<freak__> ok..right now there is some power issue .they are doing maintenance .so nodes are off..i can perform these steps after 40mins..
<jamespage> freak__, okies
<jamespage> lemme know how that goes
<freak__> ok i will definitely inform you after performing these steps
<jamespage> specifically looking for messages from machine-X.log around problems with lxc and specific error messages
<freak__> jamespage ok. i will
<jamespage> gnuoy, gerrit is foobar by the way - apparently no new jobs are being processed...
<gnuoy> oh
<jamespage> gnuoy, infra team are working on it
<gnuoy> kk
<jamespage> bbaqar, when you do appear - could you confirm what testing the updates for the plumgrid charms have had; I don't intened to exercise them myself so dependent on your test results :-)
<jamespage> bbaqar, you also asked me about migration of the plumgrid charms under the openstack project for development
<freak__> jamespage i executed the commands as you mentioned
<freak__> upon running  juju add-machine lxc:0 it showed msg created container 0/lxc/3
<freak__> but in juju status
<freak__> http://paste.ubuntu.com/15846755/
<jamespage> bbaqar, https://review.openstack.org/#/c/232705/ is the change we made to push the core charms under the openstack project
<freak__> jamespage here are machine 0 logs http://paste.ubuntu.com/15846774/
<jamespage> bbaqar, I'd suggest that you follow the same route - I suspect that you will want the group for core-reviewers to be a little different tho
<jamespage> freak__, need a bit more log I think
<freak__> ok
<freak__> which one.tell me the command i will share
<jamespage> freak__, the machine-0 log
<freak__> ok i will take the complete log and share
<freak__> jamespage any website like paste.ubuntu where i can share the file of log i have taken using securecrt
<freak__> coz the file is too long and paste.ubuntu got hung
<jamespage> freak__, a grep for 'container' might be helpful to filter things out
<freak__> u mean cat /var/log/juju/machine-0.log | grep container
<freak__> jamespage here is the output http://paste.ubuntu.com/15847017/
<jamespage> freak__, ok
<jamespage> freak__, apparently Invalid argument - setting cmdline failed is a warning only
<jamespage> freak__, lets try starting that container
<freak__> how to start
<jamespage> freak__, lxc-start --name juju-trusty-lxc-template
<freak__> ok
<jamespage> sudo lxc-start --name juju-trusty-lxc-template
<jamespage> that will happen in the foreground - hopefully that might give us a clue
<freak__> jamespage here is the issue http://paste.ubuntu.com/15847100/
<jamespage> freak__, yeah still not much info
<jamespage> freak__, try again with "-l DEBUG"
<jamespage> that might give us more
<sparkiegeek> and -F for foreground starting?
<jamespage> all I can think is the cached image in juju is foobar
<jamespage> sparkiegeek, I thought that was the default but I might be wrong
<jamespage> freak__, add "-F" as well to be sure...
<sparkiegeek> jamespage: nah, default is daemon
<jamespage> sparkiegeek, oh
<freak__> jamespage check this http://paste.ubuntu.com/15847157/
<bbaqar> jamespage: Thanks for the help. The bundles in all the charms' tests deploy smoothly.
<jamespage> freak__, needs to be -F not -f
<freak__> ok
<jamespage> freak__, and DEBUG not debug apparently...
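[Editor's note: putting the corrections together, the foreground debug invocation being assembled is below; the log file path is an assumption for capturing output to pastebin.]

```shell
# Start the template container in the foreground (-F) with DEBUG logging,
# writing the lxc log to a file that can be shared afterwards.
sudo lxc-start --name juju-trusty-lxc-template -F -l DEBUG -o /tmp/lxc-template.log
```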
<bbaqar> jamespage: have not run the amulet tests though in the last two weeks .. but they run the same bundle that i deploy every day
<jamespage> bbaqar, ok
<bbaqar> jamespage: and test for the same things i test every day
<jamespage> bbaqar, do you have a bundle in the charm-store for plumgrid?
<jamespage> gnuoy, https://review.openstack.org/#/c/304668/ could you review please
<jamespage> finally passing amulet tests...
<freak__> http://paste.ubuntu.com/15847193/
<bbaqar> jamespage: yes we do .. but it has to be updated as soon as we land the charm MPs https://jujucharms.com/plumgrid-ons/bundle/9
<jamespage> bbaqar, yah - I see some of the config options are changing names - that will break upgraders but I suspect your install base is known right now :-)
<gnuoy> jamespage, will do
<freak__> jamespage while bootstrapping there is an option to disable juju network management - i checked that option. do you think this bridge issue is related to that?
<jamespage> freak__, right - that's useful
<jamespage> freak__, hmmm
<jamespage> freak__, can you do 'ip addr' on that unit please
<jamespage> freak__, yes I suspect that is breaking things badly...
<bbaqar> jamespage: no one is on those revision .. so we are good
<jamespage> bbaqar, okay - next on my list
<freak__> so do you suggest to bootstrap again and enable that option
<jamespage> freak__, yes
<freak__> i think that will work
<freak__> ok i will try and then get back to you
<jamespage> freak__, could you raise a bug against juju for this as well - it really should tell you that you can't have lxc containers if that option is enabled
<jamespage> https://bugs.launchpad.net/juju-core/+filebug
<jamespage> that would have saved us 2 hours of debug...
<freak__> ok i will
<jamespage> freak__, thankyou :-)
<jamespage> ping me the link once raised and I'll mark it as effects me tooo...
<freak__> thanks to you as well for strong support..ok i will share link here with you
<bbaqar> jamespage: appreciate the help.
<jamespage> freak__, no problem - just as a heads up we're a week off  the next charm release that will support 16.04 and the openstack mitaka release
<jamespage> freak__, so if you're planning ahead something to think about
<jamespage> freak__, you can upgrade liberty->mitaka but not in-place 14.04 -> 16.04
<jamespage> using the charms that is
<jamespage> bbaqar, for migration under the openstack project, you'll want to add tox configurations for all of your charms...
<jamespage> bbaqar, look at any of the core charms for hints on that
<jamespage> bbaqar, any general summary for the commit messages on these?
<jamespage> bbaqar, and remind me to talk to you about direct charm store publishing soon....
<bbaqar> #let me add a commit message right now.
<bbaqar> jamespage: I ll add a commit message right now
<freak__> jamespage here is the bug link https://bugs.launchpad.net/juju-core/+bug/1570796
<mup> Bug #1570796: container startup issue when juju network management disabled <juju-core:New> <https://launchpad.net/bugs/1570796>
<jamespage> bbaqar, ta
<jamespage> bbaqar, here is fine - vim is open waiting for me to type...
<bbaqar> jamespage: just one min
<jamespage> gnuoy, nearly have odl-controller passing on xenial
<jamespage> soooo close
<bbaqar> jamespage
<bbaqar> jamespage: support added for plumgrid 4.1.3 and 5.0 releases | support added for configurable external interfaces | support added for separate fabric network (os-data-network)
<bbaqar> jamespage . use this for plumgrid-director, plumgrid-edge, plumgrid-gateway
<gnuoy> jamespage, excellent
<bbaqar> jamespage: for neutron-api-plumgrid: support added for plumgrid 4.1.3 and 5.0 releases
<jamespage> bbaqar, as a future improvement, you might have external-interfaces be a list of mac addresses to use across the deployment
<jamespage> bbaqar, there is code in charm-helpers to resolve mac -> interface
<bbaqar> jamespage: haha ... i wish i had known this earlier ..
<jamespage> bbaqar, I suspect some of the challenges we have across SDN solutions are common
<bbaqar> jamespage: i have seen the function actually .. might be difficult for the users to collect all mac-addresses for each interface on scale deployments
<jamespage> bbaqar, well maas has them all :-)
<jamespage> bbaqar, as a future feature, juju may grow support for presenting the interface directly via network spaces
<bbaqar> jamespage: You are right. and yes i saw some development on spaces .. will rethink this before pushing xenial charms
<jamespage> bbaqar, sure
<jamespage> it might be good to schedule a general MAAS 2.0/juju 2.0 update for your team
<jamespage> there will be a lot of docs being updated - prob best to wait for that
<jamespage> bbaqar, ok all landed...
<jamespage> lazyPower, ^^
<jamespage> fyi
<jamespage> gnuoy, "Controller configuration on bridge br-int incorrect: ![u'tcp:172.17.115.171:6653']! != !tcp:172.17.115.171:6633!"
<bbaqar> jamespage: thanks a lot.
<bbaqar> jamespage: For the commit into openstack project. What upstream location should we use? I see all openstack charms are in https://github.com/openstack-charmers/ ... should i keep them in our plumgrid github space?
<jamespage> bbaqar, that was just a staging area
<jamespage> any git location for the source of the charms is fine
<bbaqar> jamespage:got it
<freak__> jamespage are you there?
<freak__> jamespage i bootstrapped the environment again and enabled the option juju network management ..this time the lxc issue resolved
<freak__> this is the current status, some components are stuck http://paste.ubuntu.com/15848673/
<jamespage> freak__, looks like lxc containers might be still coming up
<freak__> ok. i will wait
<jamespage> freak__, whats the disk io like on your servers?
<jamespage> tinwood, your proposed fix for n-ovs looks good to me - is that testing ok in the dvr spec for you?
<tinwood> jamespage, I'm just getting it sorted now - let you know when it's done xenial/mitaka?
<jamespage> tinwood, okies
<freak__> jamespage its /dev/sda
<jamespage> tinwood, cause I know the amulet tests won't exercise that stuff at all
<tinwood> jamespage, indeed.
<freak__> jamespage  here is the detail http://paste.ubuntu.com/15848759/
 * lazyPower read backscroll
<lazyPower> jamespage niiiiceee!
<webscholar> ..
<lazyPower> jamespage - we threw down some docs about that as well https://jujucharms.com/docs/devel/authors-charm-store
<lazyPower> re: publishing
<jamespage> lazyPower, awesome
<freak__> jamespage still the status is same ...lxc are in allocating state
<freak__> jamespage here is the current status http://paste.ubuntu.com/15849141/
<freak__> is it normal to take so much time allocating?
<jamespage> freak__, not quite sure whats happening tbh
<jamespage> freak__, no - i suspect something else
<jamespage> freak__, can you check the status of the lxc containers on the machines?
<jamespage> sudo lxc-ls
<jamespage> sudo lxc-ls -f actually is better
<freak__> ok
<freak__> jamespage   http://paste.ubuntu.com/15849161/
<jamespage> freak__, sudo !!
<jamespage> needs root
<freak__> ohh...right   http://paste.ubuntu.com/15849174/
<jamespage> freak__, thats a good start
<freak__> yes that alright... but have you noticed template at the end is stopped
<freak__> jamespage what you think could be the issue in allocation of lxc
<jamespage> freak__, yeah - the template being stopped is fine
<freak__> my current status
<freak__> http://paste.ubuntu.com/15849258/
<freak__> jamespage i found this bug
<freak__> https://bugs.launchpad.net/juju/+bug/998238
<mup> Bug #998238: local provider unit agents get stuck in pending state because of host firewall blocking communication <firewall> <local-provider> <pyjuju:Triaged> <juju-core:Triaged> <https://launchpad.net/bugs/998238>
<freak__> they are saying disable ufw
<freak__> and then destroy environment and build again
<freak__> http://askubuntu.com/questions/134977/juju-stuck-in-pending-state-when-using-lxc
<jamespage> freak__, I doubt its enabled if the physical machines are maas deployed
<freak__> how to check ?
<freak__> whether its enabled or not
<sparkiegeek> freak__:  can you SSH on to the machines that are still pending and paste /var/log/cloud-init-output.log and /var/log/juju/*
<freak__> sparkiegeek , ok i will
<freak__> sparkiegeek , here is the output http://paste.ubuntu.com/15849429/
<sparkiegeek> freak__: that looks like it's all healthy - how about /var/log/juju/all-machines.log ?
<freak__> sparkiegeek, ok i will share
<jamespage> tinwood, recheck-full not really required for the dpdk ports fix
<tinwood> jamespage, I guess I'm just a bit paranoid.  Sorry.
<tinwood> One of the other charms broke on wily/liberty because it wasn't enabled when I did pause/resume.
<jamespage> tinwood, np
<jamespage> freak__, sorry - doing about 5 different things at once so apologies
<jamespage> freak__, thats the juju machine 0 cloud-init output
<tinwood> jamespage, xenial/mitaka works with the ovs change.  Just checking trusty/kilo now. (for dvr).
<freak__> jamespage , no issue.. you guys at canonical are very supportive.. salute you
<jamespage> freak__, you should be able to ssh directly to the lxc units  - say ssh ubuntu@192.168.6.164
<beisner> cholcombe, yah, ceph-radosgw amulet full is failing @ master in the same way as on your review.   http://pastebin.ubuntu.com/15849481/   got ideas?
<freak__> ok. let me ssh
<jamespage> freak__, you might not be able to
<jamespage> lets see
<cory_fu> aisrael: Thanks for charm-tools#182!  I had one review comment / request, but awesome that you took that on.  :)
<freak__> jamespage i was able to do ssh to lxc here is the output http://paste.ubuntu.com/15849549/
<jamespage> freak__, is 192.168.11.193 the ip address of machine 0?
<freak__> jamespage yes its the ip on machine 0 vlan 11
<jamespage> freak__, oh wait - thats a different subnet to the lxc containers? can they actually access that IP ?
<freak__> jamespage , they are on same machine
<jamespage> freak__, can you explain your networking a bit please
<aisrael> cory_fu: My pleasure. I just updated the pull request. I'm thinking I should also update the proof command to do similar wrt colourized output
<freak__> on machine 0 i have given ip 192.168.6.193 on eth 0 in maas, on eth1 no ip ,, i have created vlans/subinterfaces
<freak__> from vlan 11-16
<freak__> for update :: from lxc on machine 0 i cannot ping 192.168.11.193
<cory_fu> aisrael: I think that would be nice, but also a somewhat larger undertaking
<freak__> but on machine 0 i also have the ip 192.168.6.193
<freak__> so lxc should communicate through that
<freak__> i mean 192.168.6.193 on machine 0 can communicate to 192.168.6.164 on lxc
<jamespage> freak__, yeah - I see what you mean
<jamespage> freak__, so the vlan interfaces are all trunked in over eth0 on machine 0 right?
<jamespage> and you did that via MAAS I'm guessing...
<freak__> eth1 is trunk
<jamespage> oh right - sorry
<freak__> and vlans are configure on eth1
<jamespage> okies...
<jamespage> dimitern, hey - need some support here ^^ how does juju decide which IP address of machine 0 to use for machine agents to communicate with?
<jamespage> context here is that the lxc containers being deployed are trying to address an IP address of machine 0 which they can't actually route to
<jamespage> despite the fact that machine 0 has an IP on the same subnet
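The routing problem jamespage describes here can be checked with Python's `ipaddress` module. The addresses are taken from the conversation; the /24 prefix is an assumption about freak__'s subnetting, not something stated in the log.

```python
import ipaddress

# Subnet the lxc containers live on (from the conversation: 192.168.6.x;
# the /24 prefix is assumed, not confirmed in the log).
lxc_subnet = ipaddress.ip_network("192.168.6.0/24")

# Two addresses held by machine 0, one per interface/vlan.
addr_vlan11 = ipaddress.ip_address("192.168.11.193")  # vlan 11 on eth1
addr_eth0 = ipaddress.ip_address("192.168.6.193")     # eth0, same subnet as the lxc units

# Only the eth0 address is on the containers' own subnet, so only it is
# directly reachable without a route via a gateway.
print(addr_vlan11 in lxc_subnet)  # False
print(addr_eth0 in lxc_subnet)    # True
```

This is exactly the mismatch in the log: the machine agent handed the containers the vlan 11 address, even though machine 0 also holds an address they could reach directly.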
<jamespage> dimitern, 1.25.5 juju release - freak__: MAAS 1.9?
<freak__> jamespage maas version is 1.9
<jamespage> ta
<jamespage> freak__, bear with us on this one
<freak__> jamespage ,  no issue i will wait
<cory_fu> aisrael: Oh, I actually missed that was proof and not lint (a separate issue I created) and thought it would just be a blanket colorization.  Hrm.  I just realized that, technically, the intention of INFO in proof vs build is different and green might not be the best color for I from proof.  But I don't see another, better alternative
<jamespage> tinwood, ok - thats good enough for me - pushing that through
<jamespage> freak__, can you check for me what 'node0.maas' resolves to please
<freak__> jamespage , here is the output http://paste.ubuntu.com/15849691/
<jamespage> freak__, ack thanks - that looks right to me
<jamespage> .6 not .11
<freak__> jamespage the ip of region controller is 192.168.6.11 and of cluster controller is 192.168.6.12
<freak__> and then the node0 ip is 192.168.6.193
<jamespage> freak__, +1
<freak__> if i run dig node0.maas on machine 0 for that output is this http://paste.ubuntu.com/15849691/
<freak__> if you want dig on region or cluster controller i can also share that if you say
<tinwood> jamespage, yep, kilo passed too.  Good call. :)
<cory_fu> aisrael: Merged.  :)
<aisrael> cory_fu: <3
<freak__> jamespage 192.168.11.193 ip should be reachable from lxc .here is the snip of routing table http://paste.ubuntu.com/15850132/
<jamespage> freak__, 192.168.6.1 can route between .6 and .11 ?
<freak__> i think i should change the gateway ip to 192.168.6.193
<freak__> that would be much better
<freak__> jamespage i changed the default gw from 6.1 to 6.193
<freak__> now from lxc i can ping to 11.193
<jamespage> freak__, I suspect that's not a great idea
<jamespage> they might be able to get to the bootstrap node, but I suspect the rest of the world is then inaccessible
<freak__> jamespage you are right but in that case lxc should choose 6.193 ip to go to outside world
<jamespage> freak__, not really
<freak__> do not choose 11.193
<jamespage> a default gateway should be enough most of the time
<jamespage> the problem is the way the machine agent on the physical machine is configuring the machine agents in the lxc containers to get their tools
<icey> with juju2, how do I setup an apt-http-proxy for an openstack cloud? I have it configured in my cloud.yaml that I imported the cloud settings with but it doesn't seem to be using it
<freak__> jamespage how can i force lxc to resume its process of downloading tools which was interrupted due to the inaccessible ip 11.193
<tvansteenburgh> cory_fu: when creating a layer, when should one create a config.yaml, versus just exposing options in layer.yaml? is that documented somewhere?
<jamespage> freak__, hmm
<jamespage> freak__, restarting cloud-init might work
<jamespage> my other suggestion was to reboot the container - but that might reset your temp route
<freak__> jamespage if we make the route permanent by modifying networks file
<freak__> then restart the container
<freak__> btw how we will restart container
<cory_fu> tvansteenburgh: No, I don't think layer options are well documented yet.  Basically, layer options are a replacement for, e.g., apache.yaml from the apache-php layer, so that we don't have an explosion of .yaml files.  So, they're intended for a charm layer to influence what a base layer does, whereas config.yaml is for user-facing options
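Cory's distinction can be sketched in a few lines of plain Python. This is illustrative only: the layer names, option names, and the `resolve_options` helper are hypothetical, not a real charm-tools API; the point is merely that layer options let a charm layer influence a base layer at build time, while config.yaml holds operator-facing knobs.

```python
# Hypothetical example: base-layer defaults, a charm layer's layer.yaml
# overrides, and user-facing config.yaml occupy different niches.
base_layer_defaults = {"port": 80, "enable_ssl": False}  # defined by the base layer
charm_layer_options = {"enable_ssl": True}               # set in the charm's layer.yaml
user_config = {"domain": "example.com"}                  # from config.yaml, set by the operator

def resolve_options(defaults, overrides):
    """Layer options: the charm layer overrides base-layer defaults at build time."""
    resolved = dict(defaults)
    resolved.update(overrides)
    return resolved

options = resolve_options(base_layer_defaults, charm_layer_options)
print(options["enable_ssl"])   # True -- the charm layer influenced the base layer
print(user_config["domain"])   # operator-facing knob lives in a separate namespace
```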
<jamespage> freak__, well maybe but I'm concerned that will then just break something else
<jamespage> freak__, reboot inside the container...
<freak__> nice :P
<tvansteenburgh> cory_fu: oh, right :)
<tvansteenburgh> cory_fu: i forgot that config.yaml was just part of the charm :P
<cory_fu> :)
<shruthima> Hi all, we have developed the IBM Installation Manager charm in bash, and I have declared the ibm_im_package option in layer.yaml but I am unable to fetch it. Can you please suggest a command to fetch layer options?
<lazyPower> shruthima - layer options are documented here https://jujucharms.com/docs/devel/reference-layer-yaml
<lazyPower> ensure you've implemented them as defined in that reference guide, you can then fetch the layer options like we do in this example: https://github.com/mbruzek/layer-storage/blob/master/reactive/storage.py#L113
<lazyPower> shruthima - in addition to that guide, ensure you're also running the latest version of charm-tools (2.1.2) so you've got the required builder modifications to support options in layer.yaml
<shruthima> we have written the code in bash, so can you provide a bash example if available?
<shruthima> <lazyPower> Can you please provide any example written in bash?
<lazyPower> shruthima - ah good point i'm not sure if thats exposed via the bash CLI
<lazyPower> cory_fu i'm pretty certain it is, do you happen to know if that was added to the bash helpers?
<aisrael> jcastro: ping
<cory_fu> lazyPower, shruthima: Actually, layer options aren't exposed by the bash CLI. :(  It shouldn't be too hard to add, and it would be in the base layer, so it won't require much to get it released.
<lazyPower> oh, good to know!
<lazyPower> thanks cory_fu
<cory_fu> However, I'm pretty busy today, with the last day of the big data team sprint, so I'm not sure if I'll be able to get it done today
<shruthima> oh k thanku lazyPower cory_fu
<cory_fu> shruthima: I will send out an email to the juju list when I can get that done
<shruthima> cory_fu : Thanku :)
<bdx> hows it going everyone? I've a few jaas/charmstore questions for anyone listening .... 1. How can I delete stale charms that I don't want to show up in the charmstore anymore (ones in the legacy charmstore that got pulled in from my lp code, and new jaas namespace charms)?  2. How can I create a channel ?  3. (juju2) When I add users with write access to a model, they don't seem to be able to `juju ssh` into
<bdx> any deployed machines; do I need to add each new user's ssh key, and if so, which key?
<lazyPower> bdx - you were asking about channels - have a look at the docs over the charm store + channels here https://jujucharms.com/docs/devel/authors-charm-store
<bdx> lazyPower: perfect, thx
<lazyPower> bdx - regarding removing of charms - you can only remove charms that have been uploaded to the new store. The old store is like an icebox, think of those charms as being in stasis.
<bdx> I'll offer an award of $50 and a bucket of beer to charmstore admin who can remove some old, stale, failed charm attempts for me :-)
<magicaltrout> i'll give them some pork scratchings and a pint of ale
<lazyPower> magicaltrout - hows session planning going for apachecon?
<bdx> who could resist? someone will break .... just give them time
<lazyPower> bdx - good things come to those who are patient
<magicaltrout> bleh in between contract work on saiku, contract work for nasa and apachecon my life is a big mush of loads to do
<lazyPower> magicaltrout - Fair enough :) was poking as a leading question to offer eyeballs for review/help if any of that would be helpful to your goals there
<magicaltrout> its coming on, the Juju Data Management talk will be pretty straight forward (famous last words) I'm working on the tutorial content and presentation slides for all of them at the mo
<lazyPower> ack, just lmk when/if i can be of help
<magicaltrout> my plan is to get the planning and presentation stuff done betwen now and the 29th which is dealing for my real job, and after that I should have 2 weeks to get all the technical stuff done for apachecon
<magicaltrout> transpires Q1 2016 is pretty chaotic ;)
<magicaltrout> lazyPower: last year I was out at 2am playing very drunk beach volleyball, went to bed at 5 got up at 9 and wrote my presentation for 11
<magicaltrout> ever the professional.....
<magicaltrout> wont be happening this year with my tutorial load
<tinwood> gnuoy, interesting.  The amulet for wily/liberty isn't stopping apache2
<gnuoy> oh, well that's be it then
<tinwood> gnuoy, except now it has.
 * tinwood thinks this is weird.
<tinwood> gnuoy, might be finger trouble on my side.  More investigation.
<gnuoy> good luck
<tinwood> gnuoy, and just to verify, you were just doing amulet tests with a bootstrapped juju only?
<gnuoy> tinwood, yep
<tinwood> gnuoy, kk, ta
<freak_> jamespage i made the default gw updated route to 6.193 permanent
<freak_> and restarted the lxc
<freak_> now when i see the cloud-init-output.log
<freak_> here is the output http://paste.ubuntu.com/15852704/
<freak_> it updated the route info only but not resumed the process
<beisner> icey, cholcombe - bug 1570960
<mup> Bug #1570960: ceph-osd stuck in a mon-relation-changed infinite loop <uosci> <ceph-osd (Juju Charms Collection):New> <https://launchpad.net/bugs/1570960>
<icey> that is not stuck... or in a mon relation changed loop
<icey> it's in an update-status loop, which is by design?
<beisner> icey, well it goes on and on back to mon-relation-changed then apt installs again and again
<beisner> icey, yah so my quick assessment may not be right on the actual issue.  the symptom is solid though:  ceph-osd blocks forever on trusty-icehouse
<beisner> icey, bug updated
<icey> beisner: I can look after lunch
<cmars> icey, cholcombe question about the ceph charm.. should I be able to deploy it to lxd controller? anything special that needs to be done to get it working in containers?
<cholcombe> cmars, it should just work.  are you running into issues?
<cmars> i've tried using directories for the osds, but they always seem to be stuck in HEALTH_WARN
<cholcombe> can you paste the warning for me?
<cmars> cholcombe, yep, i'll try again and get you an error message from a fresh deploy
<cholcombe> ok thanks
<cmars> cholcombe, here's what I'm seeing: https://paste.ubuntu.com/15854608/
<cholcombe> cmars, i've seen that before with instances that  have disks that are too small.  your ceph osd tree will show that all the osds are weighted at 0 instead of 1 like they should be
<cmars> cholcombe, they're all running in containers, so they all think they have the same usage & free space as the host
<cmars> cholcombe, is that the problem, perhaps?
<cholcombe> could be.  what does your ceph osd tree look like?
<cmars> cholcombe, how do I show that?
<cholcombe> just type `ceph osd tree` :)
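What cholcombe is telling cmars to look for in `ceph osd tree` is OSDs with a CRUSH weight of 0. A small, hedged sketch of that check: the parser and the sample output below are illustrative only; the real column layout of `ceph osd tree` varies between Ceph releases.

```python
def zero_weight_osds(osd_tree_text):
    """Return the names of OSD rows whose CRUSH weight is 0.

    Assumes the ID/WEIGHT/NAME column order shown in the sample; real
    `ceph osd tree` output may differ by release.
    """
    flagged = []
    for line in osd_tree_text.splitlines():
        parts = line.split()
        # An OSD data row contains a token like "osd.0"; column 1 is the weight.
        name = next((p for p in parts if p.startswith("osd.")), None)
        if name is not None and len(parts) >= 2 and float(parts[1]) == 0.0:
            flagged.append(name)
    return flagged

sample = """\
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT
-1 0.00000 root default
 0 0.00000      osd.0     up      1.00000
 1 1.00000      osd.1     up      1.00000
"""
print(zero_weight_osds(sample))  # ['osd.0']
```

An all-zero weight column is the symptom cholcombe describes for disks (or container-backed directories) that are too small.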
<webscholar> Hi dimiter
<webscholar> are you around?
<webscholar> james how are you?
<webscholar> jamespage need some help
<c0s> bdx: I am sorry - I just saw your PR for the puppet layer, which I have essentially implemented after talking to you here yesterday
<c0s> Wasn't trying to steal your idea, really.
<beisner> thedac, can you cr+2 wf+1 this?  neutron-api-odl unit and lint were failing @master.  fyi, our ci fails b/c this charm has no amulet test yet.  https://review.openstack.org/#/c/306552/
<thedac> beisner: I'll take a look
<beisner> thx thedac
<beisner> thedac, for context:  https://review.openstack.org/#/q/topic:pbr-reqs
<cory_fu> Is it true that local provider doesn't work on centos?  I didn't think that was the case, but had someone say that it was
<thedac> beisner: approved
<beisner> cholcombe, do we have a bug opened for the cache tier failure?
<cholcombe> beisner, not yet i think
<beisner> cholcombe, ok i'll raise
<beisner> cholcombe, is this the --yes-i-really-mean-it thing? ;-)
<bbaqar> jamespage: is the next branch of keystone merged into stable yet, for mitaka? I would like to get a small commit in. Am I in time?
<beisner> hi bbaqar - charm feature freeze was last week.  we'll be releasing next to stable late next week.  the only things we can land into next before then are critical bugfixes and test updates.
<bbaqar> jamespage: okay no worries.
<jamespage> beisner, https://review.openstack.org/#/c/305065/
<jamespage> UOSCI says go...
<freak_> hi everyone
<freak_> need help regarding lxc containers
<cholcombe> beisner, yeah it's the --yes-i-really-mean-it bs
<beisner> cholcombe, bug 1571050
<mup> Bug #1571050: remove-cache-tier action failing @mitaka <uosci> <ceph (Juju Charms Collection):New for xfactor973> <ceph-mon (Juju Charms Collection):New for xfactor973> <https://launchpad.net/bugs/1571050>
<cholcombe> beisner, yeah i'm on it :)
<beisner> sweet thx sir
<cholcombe> beisner, the --yes-i-really-mean-it flag fixes it.  i'll have a patch up soon
<beisner> cholcombe, awesome
<cholcombe> beisner, https://code.launchpad.net/~xfactor973/charm-helpers/ceph-jewel-flag/+merge/292049
<cholcombe> we'll have to resync the charms again :-/
<beisner> cholcombe, c-h merged.   on these 2, i'd say just do a resync as a new patchset:
<beisner> https://review.openstack.org/#/c/305922/
<beisner> https://review.openstack.org/#/c/305933/
<beisner> cholcombe, then new reviews for any others, if any
<cholcombe> beisner, good idea
<cholcombe> beisner, i think ceph-mon and ceph are the only ones that need it since they actually run the commands
<beisner> woot
<cholcombe> i have a new patchset up for ceph-mon
<cholcombe> beisner, for the future i'm pondering pulling out the cli calls and making api calls instead
<cholcombe> so we can get out of this every iteration breaks things loop
<beisner> but you just made your own api!  :-)
<cholcombe> hehe
<cholcombe> i mean swapping out the cli calls for librados calls
<beisner> yah i'm just giving you shenanigans
<cholcombe> it's not hard but it'll require lots of typing :)
<beisner> ha!
<cholcombe> and a massive refactor which i'll have to hold off till 16.10 for
<beisner> ok so, gonna do a charm-ceph review too cholcombe ?
<cholcombe> yeah
<cholcombe> it's up
<beisner> cool.  i think this should 'just land' as its the same result as master:    https://review.openstack.org/#/c/305922/   <- cholcombe ?
<cholcombe> yeah
<beisner> boom
<cholcombe> damn it's a merge frenzy
<beisner> thedac, dang and we just revalidated amulet-full rmq @master.  well you know you'll have a good baseline  ;-)
<thedac> yeah, I suspect few people were doing ssl or we would have heard the yelling on this one
<beisner> thedac, odd though, the rmq amulet test flips ssl on and off several times then sends/check amqp messages
<thedac> beisner: subsequent to install though.
<beisner> ah but it has first settled in a non-ssl config, so everything is installed by the time we flip it
<beisner> yeah
<thedac> With the bundle it is set before install
<cholcombe> haha beisner i love your --yes-i-really-approve-it
<beisner> lolz
<beisner> thedac, cholcombe - amulet-smoke is clear on rmq, ceph, ceph-mon.  imho, land-worthy
<cholcombe> sweet!
<beisner> nothin like landing after 5p on a fri, yah?
<thedac> beisner: thanks. I'll wait for a review Monday unless you are handing out +2s
<beisner> thedac, did you exercise that in the ssl mojo spec?
<thedac> beisner: a version of that yes
<thedac> ddellav: actually this would help you. Do you want to test it out? https://review.openstack.org/#/c/306628/
<ddellav> ah nice thedac
<beisner> thedac, ddellav - yah the mojo spec is really our only mechanism to exercise that.   it lgtm though!
<beisner> thedac, i'd be inclined to land her now tbh
<thedac> ddellav: I'll leave it to you. We'll land it after you test it
<ddellav> thedac whats the best way to incorporate that fix into my mojo spec?
<beisner> i don't think a refspec is consumable in mojo, though it is consumable in juju-deployer
<thedac> oooh, good question. Because we need to grab the ref specs.
<thedac> Yeah, not sure it is doable in any easy kind of way
<beisner> ddellav, thedac - looks like it'd take a fork, refspec fetch, merge, push, and repoint the spec's collect at the fork
<ddellav> beisner ew
<thedac> beisner: ddellav this is what I tested with. http://pastebin.ubuntu.com/15860862/ That both produced the problem and showed that it was fixed.
<thedac> honestly I think that is enough
<ddellav> thedac ok, ill run with that
<thedac> with a local copy
<beisner> thedac, me too.   the only hold up i've got is:    is the ctl file always /usr/sbin/rabbitmqctl, from precise --> xenial?
<thedac> as far as I know. Also we use the same test elsewhere in the charm
<beisner> i'm sold
<cholcombe> i'm thinking of adding librados to charmhelpers.  Is it a bad idea to import python libraries that require an apt-get install ?
<cholcombe> looks like the README.test already contains a bunch of libraries.  Seems like it's ok
#juju 2016-04-16
<tinwood> Quick fix anybody?  juju debug-hooks isn't working, does: 'failed to connect to server: Connection refused' and only connects to root on the unit.
<bbaqar_> (trusty-mitaka) neutron net-list returns: "An auth plugin is required to fetch a token" ... http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/novarc
<ejat> hi .. i just notice there is new package juju-1
<ejat> fenris@x240:~â« juju
<ejat> juju: command not found
<ejat> 127 fenris@x240:~â« whereis juju
<ejat> juju: /usr/lib/juju
<ejat> fenris@x240:~â« ls /usr/lib/juju
<ejat> juju/      juju-1.25/
<ejat> now need to use juju-1 ?
<pmatulis> ejat: you can if you want
#juju 2017-04-10
<kklimonda> is charms.reactive and layering "the way" to write new charms?
<hloeung> kklimonda: yeah. As I understand, it's also a requirement to get charms accepted into the charmstore / official
<mwhudson> axw, menn0: you'll be getting the go 1.8.1 snap on the next refresh, hope it works :)
<axw> mwhudson: thanks!
<axw> mwhudson: I'll let you know if I have any more issues
<menn0> mwhudson: sweet!
<menn0> mwhudson: what changes in this?
<mwhudson> menn0: dunno!
<mwhudson> menn0: https://golang.org/doc/devel/release.html#go1.8.minor
<kjackal> good morning juju world!
<kjackal> hloeung: kklimonda: you can push to the store under your namespace any charm (reactive or not). Also, as far as I know the use of reactive is not a requirement (eg I have reviewed charms in perl ). The requirement is that the charm has to be reviewable and with enough tests
<hloeung> kjackal: ah, thanks for clearing that up
<kjackal> hloeung: kklimonda: hloeung when you mention "charmstore / official" you mean charms that have gone through the review process and become recommended. This is why these charms have to have tests
<kjackal> hloeung: kklimonda: you can push charms under your namespace without any restrictions and unattended
<kklimonda> [insert a comment about fragmenting community]
<kklimonda> kjackal: so, as long as it's not a part of a larger project (like openstack-charms) everything goes?
<kklimonda> I was a bit confused about charms.reactive given that it's mentioned in docs, and yet pypi release is from 2015 - but now I see that there are newer releases on github
<kjackal> kklimonda: you can push a charm that uses/extends openstack charms/bundles. Your charms will be available under cs:~kklimonda/my-awesome-charm . If you want to partner with Canonical - partnership means having Canonical recommend your charm, shared marketing events etc - you will have to go through a charm review process. This process includes "charm school" meetings where our engineers show you how charms work, how they are tested, best
<kjackal>  practices etc. As soon as a charm is reviewed and accepted as recommended ("promulgated") the charm is served from cs:my-awesome-charm
<kjackal> kklimonda: why would that fragment the comunity?
<kklimonda> kjackal: you've mentioned a charm written in perl, I've seen examples of bash charms, someone was discussing a puppet-based charm, and I've seen code for wrapping ansible. That doesn't strike me as a desirable trait, given that the barrier of entry to contribute and/or fix charms is basically unbounded.
<kklimonda> kjackal: even in my time with openstack (and related) charms I've seen 3 ways of defining apt repositories, with subtly different syntax.
<kjackal> kklimonda: It is true that Juju does not stand behind a single language. There are some (strongly) recommended practices like reactive. The reason for this is that Juju is not here to install your software on a machine. Instead Juju will handle the lifecycle of your software and your infrastructure as a whole. What I mean by that is that you may have puppet, ansible, python, perl, bash scripts that do the configuration
<kjackal> and deployment of your software. You should be able to reuse the (operational) logic you have for managing a service. When this logic gets complicated (see for example openstack) Juju gives you a great environment to model your infrastructure and manage its lifecycle. Think of it like juju is a higher level set of abstractions that allow you to model and manage the changing states of your infrastructure
<kklimonda> kjackal: so the idea is that you can either start from scratch, or wrap your existing internal tools in juju, and have it working with as little modifications as possible?
<kjackal> kklimonda: Yes. And as you get into more complicated states you should align your operational logic to match the abstractions juju suggests (mainly charms and relations)
<kjackal> kklimonda: let me also point out that Juju shines when you have dynamic environments where charms interact with each other. From a theoretical perspective Juju allows you to do service choreography, although most of the time the term orchestration is used to describe what juju can do..... I am taking this too far... I should stop here
<kklimonda> mhm, interactions between charms - provisioning users, sharing secrets, are indeed pretty cool
<kklimonda> what about orchestrating applications on top of kubernetes cluster with juju? There's a lot of chatter about juju kubernetes distribution, but not much about how to provision stuff on top of the cluster.
<kjackal> kklimonda: at the moment we do not have a solution for delivering and orchestrating apps on kubernetes
<BlackDex> blahdeblah: Do you know if it is possible to add two nrpe plugins to the same main-charm?
<farfetchd> Hi All, we are using the new juju library 0.3.0 where the AddMachine method was implemented. We are facing some issues: can somebody help? here is the log: https://p.rrbone.net/paste/BKUdtoIr#twKArUtuTYMelvtKi0NmwXezdSImPiyCYtzwNuySxcd
<farfetchd> what we are executing is here: https://p.rrbone.net/paste/4V3MquPk#CEjpqUArvyiS1QA2APxpaezgP7WZNqdGAeR64sOmsZc
<blahdeblah> BlackDex: I *think* so, as long as they use two different application names, but I've never tried it, nor am I aware of any instances of it being used.
<cnf> hmm
<cnf> proxy use in juju is a mess
<Zic> do you know if running "juju config kubernetes-worker install_from_upstream=true" on a CDK production cluster is possible? what Juju will do, upgrading Docker everywhere or one worker by one worker?
<Zic> (oh sorry, forget to say hello :'()
<Zic> forgot*
<kjackal> Hi Zic looking at https://github.com/juju-solutions/layer-docker/blob/master/reactive/docker.py#L85 this will change docker under the hood. I think it will run on all workers at the same time.
<Zic> kjackal: ok, so I will plan a maintenance upgrade at night, thanks :)
<Zic> just for info, I tried it on our preproduction cluster, all is working except I needed to restart kubelet on every kubernetes-worker
<Zic> all green after that in juju status :)
<cnf> hmz
<cnf> juju is a mess when you need it to be behind a proxy :/
<BlackDex> blahdeblah: Oke, cool, maybe i will try. If i do, i will let you know :)
<cnf> any idea how i can make juju retry provisioning a container?
<cnf> this is very much a painpoint in juju, imo
<BlackDex> provisioning of the container?
<BlackDex> did it start the container?
<cnf> it can't _start_ it
<cnf> it couldn't even download it
<cnf> i set the proxy, for now, so i need to tell it to retry it
<cnf> and after that, remove the proxy again
<BlackDex> you could try to restart the jujud machine unit
<BlackDex> is the machine in error state?
<cnf> no, nothing is in error
<cnf> hence why i can't poke anything
<cnf> i can do neither juju resolved nor Juju retry-provisioning
<cnf> i'm kinda stuck
<stub> I haven't needed to retry - it seems to do it automatically as soon as things are right (eg. fixing my VPN so I have network connectivity again, or bouncing my web proxy so it binds to the correct IP address)
<cnf> yeah, it's not moving
<cnf> at all
<stub> It isn't very informative about it, but works. ie. it just sits there unhelpful until I diagnose the problem , myself and fix it
<stub> Bouncing the jujuds on the controller would be a good kick in the teeth, but I have no idea what side effects that would have
<cnf> hmz
<stub> I have seen it sit there for a long, long time if images are being downloaded over a slow link
<cnf> it's not doing anything
<cnf> and it's not a slow link
<stub> juju destroy-machine --force and retry now you have sorted your proxy?
<cnf> yeah,no, because it needs to NOT have a proxy set for some things, and a proxy set for others
<cnf> because juju is stupid that way
<cnf> and resetting a machine takes 20+ minutes
<stub> There are separate settings for apt proxy vs http proxy if that helps
<cnf> no, it doesn't
<cnf> because lxd doesn't use the apt proxy
<cnf> and setting http-proxy sets it for _everything_
<stub> Oh, I'm dealing with the lxd provider rather than lxd containers on a machine so I'm unsure if my experience counts here
<stub> yeah, I've had to add specific proxy config options to some charms (eg. the proxy for the snap layer to use for snap store requests)
<cnf> i keep putting in places where it doesn't know how to recover
<cnf> not confidence inspiring
<cnf> brb, got to pick up a package
<kjackal> Zic could you open a ticket to track down the issue of needing to restart kubelet? https://github.com/juju-solutions/bundle-canonical-kubernetes/issues
<cnf> k, disk swapped
<cnf> stub: i don't know how to configure proxy for lxd, but not system wide...
<cnf> if i set system wide proxy, openstack won't work
<cnf> and no_proxy doesn't understand CIDR
<cnf> so i'd have to add every single IP possible, which is a long list
<cnf> https://bugs.launchpad.net/juju/+bug/1488139 is relevant, btw
<mup> Bug #1488139: juju should add nodes IPs to no-proxy list <landscape> <network> <oil> <proxy> <juju:Triaged> <https://launchpad.net/bugs/1488139>
<cnf> so i don't know how to solve this... apt has apt-http-proxy, but lxd doesn't use that
<cnf> and i don't know how to make juju retry the lxd setup, ffs >,<
<stub> #openstack-charms might have insight. I vaguely recall this being discussed before re: no_proxy
<cnf> stub: yes, with me
<cnf> hmz, it seems it just isn't possible
<cnf> juju stops trying when it can't pull a container
<cnf> logs don't show anything, either
<cnf> just not doing anything
<cnf> ugh, this is stupid
<cnf> juju retry-provisioning should support containers
<Zic> do you have any release date planned for CDK with K8s 1.6?
<kjackal> Zic: We are finishing up our testing today. If all goes well we should be releasing it tomorrow
<kjackal> Zic: it should be 1.6.1
<kjackal> Zic: what are you looking to find in 1.6?
<lazyPower> o/ Zic
<Zic> kjackal: just a question of my customer, nothing particular in my concern :)
<Zic> hi lazyPower :)
<kwmonroe> cnf: not sure if it'll help, but here's a shortcut for getting all lxd subnet IPs into no-proxy when adding a model:  juju add-model foo --config no-proxy=`echo 10.x.y.{1..255} | sed 's/ /,/g'`
<cnf> kwmonroe: times 4, for each subnet
<kwmonroe> or.. `echo 10.x.{a,b,c,d}.{1..255} | sed 's/ /,/g'`
<cnf> kwmonroe: i'll probably do something like that to get me going, thanks
<cnf> but it should be fixed in juju, though
<kwmonroe> agreed cnf -- good find on 1488139.  that feels like the right bug to get this fixed under.  after reading the final comment there, i'm not sure what the ramifications of adding so many IPs to no_proxy will be for you.  and i'm sure at some point, there's a limit to the length of data that add-model will allow :/
<kwmonroe> still, hopefully it'll get you moving
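kwmonroe's shell brace-expansion trick, and cnf's complaint that no_proxy doesn't understand CIDR, can be reproduced with Python's `ipaddress` module: expand each CIDR block into explicit host addresses and join them the way no_proxy expects. The subnets and model name below are placeholders, not values from the log.

```python
import ipaddress

def no_proxy_from_cidrs(cidrs):
    """Expand CIDR blocks into the comma-separated host list no_proxy expects.

    no_proxy itself doesn't accept CIDR notation, so every host address has
    to be listed explicitly. The subnets used here are placeholders.
    """
    hosts = []
    for cidr in cidrs:
        hosts.extend(str(ip) for ip in ipaddress.ip_network(cidr).hosts())
    return ",".join(hosts)

value = no_proxy_from_cidrs(["10.0.8.0/30", "10.0.9.0/30"])
print(value)  # 10.0.8.1,10.0.8.2,10.0.9.1,10.0.9.2

# Hypothetical usage with a model named "foo":
#   juju add-model foo --config no-proxy="$(python3 gen_no_proxy.py)"
```

As kwmonroe notes, for real /24 subnets this produces a very long value, and there may be a limit to how much data add-model will accept.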
<cnf> uhu, i think my 1681495 is relevant too, though
<cnf> even if you just want to cache lxd images at the edge
<cnf> admcleod:
<cnf> oops
<kwmonroe> yeah cnf - i like your idea of lxd-proxy in addition to apt, http, etc
 * cnf nods
<cnf> btw, you don't know of a way to solve https://bugs.launchpad.net/juju/+bug/1681435, do you?
<mup> Bug #1681435: juju retry-provisioning should support containers <juju:New> <https://launchpad.net/bugs/1681435>
<kwmonroe> cnf: afraid i don't
<cnf> k
<kwmonroe> cnf: is rebooting the machine housing the lxd units an option (juju run --machine X 'sudo reboot')?  i wonder if juju would attempt some kind of retry if the controller <-> host lxd communication gets bounced.
<cnf> kwmonroe: i did, doesn't help
<kwmonroe> boo
<arosales> Cynerva: lazyPower: any success on https://github.com/juju-solutions/layer-etcd/issues/89
<arosales> hopefully that is the last issue needing to be resolved for 1.6.1
<Cynerva> arosales: we have a fix merged in and tested
<arosales> thats an awesome answer :-)
<arosales> sorry I missed that I was following updates in https://github.com/juju-solutions/layer-etcd/issues/89
<arosales> but good stuff
<arosales> mbruzek: by chance, have you built canonical-kubernetes with the latest bits in the candidate channel?
<lazyPower> arosales: thats what we're doing now, we're releasing to candidate to test
<mbruzek> arosales: we are doing that now, should be available in minutes
<arosales> solid
<arosales> sounds like we may have a  good canonical-k8 to test
<arosales> Cynerva: lazyPower: mbruzek thanks
<mbruzek> arosales: we will ping you when done.
<arosales> mbruzek: thanks could you also give larrymi a ping. I think he was trying to get some run time on maas.
<arosales> thanks
<mbruzek> Great
<mbruzek> arosales: We just pushed to candidate
<larrymi> arosales, mbruzek: working on it now
<arosales> mbruzek: thanks
<arosales> larrymi: candidate channel should have the latest bits, and should be a good bundle to test on maas with
<larrymi> arosales, cool will use those.
<arosales> larrymi: thanks or the testing, ping if you have any issues
<larrymi> will do arosales
#juju 2017-04-11
<viswesn> Why I am not able to see juju model-config updated value - http://pastebin.ubuntu.com/24358872/
<kjackal> Good morning Juju world!
<erik_lonroth> good morning.
<joedborg> morning! o/
<joedborg> Quick juju question - will 2.2 have the same behaviour as 2.1.2 in terms of not automatically bridging interfaces?
<cnf> in a model yml file, can I define to: lxd, and not specifiy what machine it should put it on?
<rick_h> cnf: you can for a machine in the bundle, but not an existing machine.
<cnf> rick_h: right, that's what i meant
<cnf> what is the correct syntax?
<cnf> how do you delete subordinate units?
<tvansteenburgh> cnf: juju remove-unit
<anrah_> is there a reason why 2.0.X are not listed on juju-stable repository?
<cnf> tvansteenburgh: not for subordinates
<tvansteenburgh> cnf: that's news to me, why doesn't it work?
<cnf> i don't know, it tells me remove-unit doesn't work with subordinates
<tvansteenburgh> cnf: huh. i never realized that. but: https://jujucharms.com/docs/2.0/authors-subordinate-services#caveats
<cnf> hmm
<lazyPower> cnf: is this via the CLI or via the Gui?
<lazyPower> cnf: subordinates are a special case where they presume to exist for the lifecycle of the related unit. I've had some talks with rick_h about this in the past, but I don't think we've settled on anything beyond we have differing opinions about subordinate units and their lifecycle.
<lazyPower> and when i say related unit, i mean principal unit.
<cnf> cli
<cnf> i uh, never use the gui?
<cnf> i don't understand it
<rick_h> cnf: so there's an issue in that subordinates are colocated on the root machine and could do things like install PPAs, modify packages, etc. Such that figuring out what's the subordinates changes vs the host application is impossible.
<rick_h> cnf: anything that claimed to "remove the subordinate" would be a lie you can't trust in your model. It's dirty at that point
<rick_h> cnf: so yea, we go the route that to get a clean host application you need to redeploy it w/o the subordinate relation
<cnf> rick_h: right
<cnf> rick_h: so i did a silly, and installed 2 applications on 1 machine that _both_ had ntp as a subordinate
<cnf> so they were fighting over who got to config it
<cnf> rick_h: anyway, they are no longer on the same machine, but thanks for the explanation
<rick_h> cnf: yea, sorry. I agree that it'd be cooler if we could make that cleaner. Honestly, I think things like snaps help in this world a little bit.
<rick_h> cnf: but we'd rather be a bit more up front about a known possible spot than have folks chasing why something is failing in production for a non-obvious reason that there used to be a subordinate that didn't remove cleanly there.
<cnf> rick_h: sure
<cnf> rick_h: it's not always clear where the problem is coming from :P
<rick_h> cnf: yea, it's a bit of magic managing I guess.
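[editor's note] The redeploy route rick_h describes above can be sketched roughly as follows; `my-app` is a hypothetical principal, `ntp` is the subordinate from cnf's case:

```shell
# Detach the subordinate from the principal. Existing units keep
# whatever the subordinate already changed on disk -- this does NOT
# clean the host.
juju remove-relation ntp my-app

# For a genuinely clean host, redeploy the principal without the
# subordinate relation, then retire the old application.
juju deploy my-app my-app-clean
juju remove-application my-app
```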
<Trefex> hi guys, is there any information on what Ceph-Dash looks like? https://jujucharms.com/u/canonical-storage/ceph-dash/
<rick_h> cholcombe: ping ^
<cholcombe> Trefex: you're looking for screenshots?
<Trefex> cholcombe: yeah for instance, i found a ceph-dash from Crapworks
<cholcombe> Trefex: ah yes.  this one is different than the crapworks dash
<Trefex> cholcombe: but I have no idea what ceph-dash does, why I should pay for it (I guess it's behind canonical paywall) and what the rest of the architecture should look like
<Trefex> eg telegraf, collector, influx, etc
<cholcombe> Trefex: yeah that's an excellent question. i'll work on getting some nice screenshots prepared :)
<Trefex> cholcombe: wohooo thanks :)
<pranav> Hi Folks. Need help with the cinder charm. getting the following error in cinder-scheduler log
<pranav> oslo_service.service RPCVersionCapError: Requested message version, 3.0 is incompatible.
<pranav> any pointers
<pranav> couldn't find anything on google
<bdx> pranav: ask in #openstack-charms
<pranav> Already did. Still waiting there. Thanks
<Budgie^Smore> o/ juju world
<rick_h> anyone know a twitter handle for tengu/merlijn's folk?
<SimonKLB> is it possible to change the profile image of a team in jujucharms.com ?
<urulama> SimonKLB: the profile is actually a gravatar defined by an email. we could add an option that one can set that for the team, but there is no such functionality atm
<cnf> kwmonroe: so setting no_proxy didn't help, btw
<smgoller> Hey all, I'm trying to bootstrap a juju controller on our local openstack cloud, and it's not clear to me what the answer is to the question "Enter the API endpoint url for the cloud"
<smgoller> The docs mention port 35357, which I can't find any reference to in the openstack documentation
<hml> smgoller: use your keystone endpoint
<hml> smgoller: i believe 35357 is the port for authentication v3, 5000 is for v2
<smgoller> ahhh
<hml> smgoller: the same endpoint can be used for both questions related to endpoints in add-cloud
<smgoller> failed to bootstrap model: no image metadata found
<smgoller> is it expecting an image with a particular name to exist?
<smgoller> figured it out
<smgoller> thanks!
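[editor's note] For reference, the keystone endpoint hml describes would land in a clouds.yaml entry like the one below; the hostname, cloud name, and region are made up for illustration:

```shell
# Hypothetical clouds.yaml for a private OpenStack; keystone v3 auth
# typically listens on 35357 (admin) / 5000 (public).
cat > mystack.yaml <<'EOF'
clouds:
  mystack:
    type: openstack
    auth-types: [userpass]
    endpoint: https://keystone.example.com:5000/v3
    regions:
      RegionOne: {}
EOF
juju add-cloud mystack mystack.yaml
```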
#juju 2017-04-12
<oops> hi, anyone knows how to get /proc/sys/net/bridge/bridge-nf-call-iptables inside an LXD? I've added br_netfilter to the LXD config, but it didn't help.
<cnf> morningmorning
<cnf> hmm, https://bugs.launchpad.net/juju/+bug/1681495 is getting no love :/
<mup> Bug #1681495: add juju lxd proxy configuration <juju:New> <https://launchpad.net/bugs/1681495>
<cnf> also https://bugs.launchpad.net/juju/+bug/1681435
<mup> Bug #1681435: juju retry-provisioning should support containers <juju:New> <https://launchpad.net/bugs/1681435>
<rick_h> cnf: yea, not there yet
<cnf> rick_h: on which one?
<cnf> 1681495 is severely hampering my ability to deploy atm :/
<cnf> hmz, so can't deploy with a proxy set, can't deploy without one set
<cnf> great
<cnf> rick_h: the combination of those 2 really has me stuck in a catch22
<rick_h> cnf: :( understand.
<rick_h> juju show 15ish minute warning!
<rick_h> get your coffee refills now
<cholcombe> Trefex: so basically the reason to install ceph-dash from canonical is that it decodes ceph's rpc messages over the wire and can tell you the read/write sizes and what objects they're writing to.  There's a lot of other stuff it shows but that's the big differentiator
<rick_h> H/O is https://hangouts.google.com/hangouts/_/3bgmb2u3szfunetelpz5m6olnqe and the stream to view is https://www.youtube.com/watch?v=V_VhVffbooU for those coming along for the ride.
<rick_h> marcoceppi: jrwren lazyPower hatch ^
<marcoceppi> rick_h: in an airport, next time ;)
<rick_h> marcoceppi: doh safe travels
<rick_h> hopefully somewhere nice :)
<rick_h> mbarnett: ^
<rick_h> sorry, mbruzek
<rick_h> tab-fail
<mbarnett> np
<mbruzek> rick_h: We are kind of busy right now
<mbruzek> I may not be able to join
<rick_h> mbruzek: ok, all good
<marcoceppi> rick_h: home, which is always nice
<mbruzek> I hit the wrong button
#juju 2017-04-13
<Zic> hi here, do you have any progress on Kubernetes 1.6 for CDK? I'm kinda excited to test it, as some feature of K8s in 1.6 will really simplify our lives :p (I don't know how to check your advancement except by lurking the juju-solutions GitHub)
<sparkiegeek> https://insights.ubuntu.com/2017/04/12/general-availability-of-kubernetes-1-6-on-ubuntu/ ?
<Zic> sparkiegeek: oh, thanks, did not notice that :)
<Zic> I'm reading the procedure to upgrade from CDK 1.5.3 -> what do you mean by "Wait until the software upgrade is complete before migrating the etcd data (a manual step)."
<Zic> do I need to restore the etcd cluster like this? https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md#restoring-a-cluster
<Zic> cc lazyPower / kjackal when you will be around :)
<Trefex> cholcombe: ok great thanks
<tvansteenburgh> Zic: are you asking what is meant by "wait until the software upgrade is complete"?
<Zic> tvansteenburgh: nop, the "before migrating the etcd data (a manual step)"
<Zic> does this mean that I need to restore my etcd data manually?
<Zic> or that Juju covers it with the "snap-upgrade" extra-step?
 * lazyPower reads backscroll
<lazyPower> sorry, getting in freenode a bit late this morning :) o/
<lazyPower> Zic: hey there, nope. that migration we refer to is a charm upgrade (wait for the charm upgrade to complete) then run the manual action specified by the status message
<lazyPower> Zic: we moved etcd to snap based delivery, and its a downtime incurring event. Total time down should be ~ 5 minutes, so not crazy required downtime, but something to plan for for sure.
<Zic> lazyPower: oh cool indeed, I thought I must do the restore myself :(
<Zic> was a bit stressed up about it since etcd and me are not good friends
<lazyPower> Zic: not at all
<lazyPower> just make sure you snapshot and fetch it. the upgrade process does do a checkpoint, but get in the habit of snapshotting before making operational changes to etcd :)
<Zic> lazyPower: fantastic :p thanks for all this automation :)
<lazyPower> Zic: also, if you're curious about the upgrade, i do encourage you to deploy revision 24, and then upgrade to 29
<lazyPower> see what its doing and become familiar with what you'll be doing in prod
<Zic> thanks
<lazyPower> Zic: and this unlocks the pathway to etcd 3.x
<lazyPower> juju config etcd channel=3.0/stable, juju config etcd channel=3.1/stable  -- this process (waiting for each config-changed event to complete ofcourse) will automatically ingest the next series, and upgrade the cluster through the major versions. Would love feedback on your experience there
<lazyPower> note: 3.0 is mandatory stopgap to do the config/data format change. this is outlined in the readme.
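[editor's note] The stepped channel upgrade lazyPower outlines, spelled out; wait for each config-changed hook to settle before the next step:

```shell
# etcd 2.x -> 3.0 (mandatory stopgap for the data-format change)
juju config etcd channel=3.0/stable
juju status etcd        # wait until all units report active/idle

# then 3.0 -> 3.1
juju config etcd channel=3.1/stable
juju status etcd
```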
<Zic> my next question was actually how to force Juju to deploy an old pre-1.6 version of CDK to train myself :D
<lazyPower> 2.x => 3.1 is not supported at this time, I'll look into migrating those scripts in 3.0 to 3.1 so it gets carried forward
<lazyPower> ah
<lazyPower> Zic: you should be able to use the older bundle revision
<Zic> so I have my answer :p
<lazyPower> yep :)
<lazyPower> we do support channel configs to deploy the older k8s. i think we only have 1.5 and 1.6 in the snap store
<Zic> yup, by specifying it in the Juju GUI at the charm level
<lazyPower> Cynerva: ryebot:   can you confirm this?
<ryebot> catching up
<ryebot> lazyPower Zic: that's correct
<lazyPower> fantastic, thanks ryebot
<lazyPower> I didn't think we had any other versions in there but its good to confirm with the peeps that did the heavy lifting
<ryebot> lazyPower Zic: I don't believe downgrading from 1.6 to 1.5 works, ftr, but deploy from a custom bundle with channel configuration should work.
<Zic> ryebot: oh no, my question was not about downgrade, simply force an old version for a fresh-install
<ryebot> Zic: That should work well, I tested that personally
<Zic> to train/test myself for the D Day for the upgrade in production :)
<Zic> (mostly the etcd2 (deb) -> etcd3 (snap) part which feared me a bit)
<Zic> fearing*
<Zic> as we're really in prod now
<Zic> lazyPower: what's the purpose of kube-control?
<Zic> (saw it in new/deprecated juju relations)
<lazyPower> This interface provides communication between master and workers in a Kubernetes cluster.
<lazyPower> at this time we're using it to send DNS information and expose information for the GPU based workloads
<lazyPower> instead of interface proliferation it made sense to abstract into an administrative interface
<Zic> thanks for the info :)
#juju 2017-04-14
<cnf> morning
<Budgie^Smore> o/ juju world
<rick_h> party
<jrwren> Budgie^Smore: is your nick at all related to the desktop by that name?
<Budgie^Smore> jrwren not that i am aware of, this has been my IRC nick since 97
<jrwren> Budgie^Smore: ok, just curious.
<Budgie^Smore> jrwren no worries :) now you have me curious... budgie has been my nick since elementary
<jac_cplane> what irc channel can I use to get help with charm command
<rick_h> jac_cplane: you can ask here. A lot of folks on holiday today so kind of quiet
<jac_cplane> understood. thanks
<cholcombe> rick_h: i deployed a reactive subordinate charm onto a ceph-mon xenial box and it seems to be using python2.  is that expected behavior?  the ceph-mon charm is not reactive
<jac_cplane> ok - so I am part of a charm group ~cplane-team
<jac_cplane> we have charms published in the charm store
<rick_h> cholcombe: not sure. I would think that the reactive subordinate would carry it's own python version with it as a dep, but I've not tried it so not sure.
<cholcombe> rick_h: ok
<jac_cplane> but when I issue "charm list" I recieve a message that says "No charms found"
<rick_h> jac_cplane: k, so it only checks for your username ootb and you need to specify the -u flag for others like https://pastebin.canonical.com/185983/
<rick_h> jac_cplane: and as long as you have permissions you should be able to push to those charms with the correct charmstore urls e.g. cs:~cplane-team/trusty/cplane-controller and such
<jac_cplane> Yes - I can push just fine.   but cannot list.  so I was just wondering if there was a problem with my account.  btw - I cannot access pastbin
<rick_h> jac_cplane: oh my bad
<rick_h> jac_cplane: try http://paste.ubuntu.com/24381761/
<jac_cplane> @rick_h thanks.  !
<rick_h> jac_cplane: np, let me know if that doesn't help you do what you're looking for.
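[editor's note] The `-u` usage rick_h's paste presumably demonstrates (the original pastebin link is Canonical-internal):

```shell
# charm list only checks your own username by default;
# pass -u to list charms owned by a group namespace instead:
charm list -u cplane-team
```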
<rick_h> any juju folks, please check out and share along my latest "using juju for good stuff" blog post http://mitechie.com/blog/2017/4/12/three-reasons-you-need-to-keep-a-vpn-in-your-pocket ty!
<Budgie^Smore> rick_h did I miss reason #1 in that link? I see the setup and reason #2 and #3 but ...
<rick_h> Budgie^Smore: well the setup is kind of reason one (private connection)
<rick_h> Budgie^Smore: I didn't call it out like that so it was sneaky and unclear :P
<Budgie^Smore> rick_h that is what I thought ... but I have learned a long time ago not to assume anything
<Budgie^Smore> rick_h oh I hate people who suggest remapping port 22! security by obscurity isn't security!
<rick_h> Budgie^Smore: :)
<rick_h> Budgie^Smore: but I get it all the time. "You shouldn't run ssh on port 22..." but in today's world of transient machines kind of a pita
<Budgie^Smore> rick_h you could add a bounce on how to setup your cloud instance fw to block traffic from all IPs but your vpn
<Budgie^Smore> rick_h as I have helped run 30% of the world's web traffic, I dare anyone to tell me that and not get laughed out of the room... 250k servers / 2.4k+ data centers / 100+ countries, I know what I am talking about :)
<rick_h> Budgie^Smore: :)
<jac_cplane> when specifying a charm in a bundle, is it necessary to specify the version of the charm -0, -1 ?   will it alway take the latest if no version is specified
<rick_h> jac_cplane: the bundle will grab the latest stable released version if no version is specified
<jac_cplane> awesome - thanks rick
<rick_h> jac_cplane: the version can be encouraged to make sure the deploy is exactly what you expect if you're doing testing/repeatable deploys, but if it's just a general solution and gets tested before charms are updated should be just fine w/o a version number
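[editor's note] jac_cplane's question as a concrete bundle fragment; the revision number here is hypothetical -- pin one for repeatable deploys, or omit it to track the latest stable release:

```shell
cat > bundle.yaml <<'EOF'
applications:
  cplane-controller:
    # pinned: always deploys this exact revision
    charm: cs:~cplane-team/trusty/cplane-controller-5
    num_units: 1
  mysql:
    # unpinned: grabs the latest stable release at deploy time
    charm: cs:mysql
    num_units: 1
EOF
juju deploy ./bundle.yaml
```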
#juju 2017-04-15
<erik_lonroth_> Hello, I'm trying to get started with charm development but the documentation isn't really working. I'm starting here and it fails pretty immediately on me doing "snap install charm"
<erik_lonroth_> https://jujucharms.com/docs/stable/tools-charm-tools
<erik_lonroth_> ... which then ends up with "error: snap "charm" requires classic or confinement override
#juju 2017-04-16
<rick_h> erik_lonroth: sorry, we'll get the docs updated. You need to add a --classic flag to the install command
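[editor's note] For anyone hitting the same "requires classic or confinement override" error, the fix rick_h describes is:

```shell
sudo snap install charm --classic
```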
#juju 2018-04-09
<wolfmitchell> bdx, ^
<wolfmitchell> bdx, if you mean multiple spaces in MAAS I have everything in a single space
<wolfmitchell> also, adding a subnet in maas does nothing to make it stop
<wolfmitchell> also, it seems like whichever node is used for neutron-gateway goes offline after it's installed .-.
<wolfmitchell> although that's more of a question for #openstack i guess
<thumper> wolfmitchell: which version of maas and juju ?
<wolfmitchell> Juju 2.3.5-bionic-amd64
<wolfmitchell>  with maas 2.4.0~beta1
<wolfmitchell> .. crud it copied the newline
<wolfmitchell> w/e
<thumper> wolfmitchell: can you deploy a much simpler model with lxd containers on maas?
<thumper> like mysql/wordpress or mediawiki or something
<wolfmitchell> yeah, i'll try this
<wolfmitchell> https://jujucharms.com/mediawiki/19
<wolfmitchell> sorry, should have pasted the link lol
<wolfmitchell> thumper, getting that same error, uploading a pastebin of the model's yml rn
<wolfmitchell> http://upaste.me/4a2a4958751d1aee3
<thumper> what does your maas network setup look like?
<wolfmitchell> this is a virtual setup just to learn MAAS, Juju, and Openstack, everything is connected to the same bridge interface on the host
<wolfmitchell> thumper, ^
<thumper> hmm...
<thumper> what networks do you have defined in maas?
<thumper> I think juju expects at least something
<wolfmitchell> I have a single subnet (10.0.0.0/8) defined
<thumper> hmm...
<thumper> I'm a little confused
<wolfmitchell> how so?
<thumper> wolfmitchell: we have someone coming on in a few hours that has more experience with this
<thumper> wolfmitchell: just outside what I deal with mostly
<wolfmitchell> crud, would they be on around about this time tomorrow (minus 5h or so)
<thumper> wolfmitchell: also, confused because simple setups should just work
<wolfmitchell> since I have to get off uh
<wolfmitchell> soon(tm)
<wolfmitchell> like 7min
<wolfmitchell> i have work early tomorrow
<thumper> try first thing tomorrow your time, failing that, file a bug with all info
<thumper> we follow those
<wolfmitchell> mmk
<wolfmitchell> anyway i'm getting off
<wolfmitchell> thanks for helping thumper
<thumper> sorry i wasn't more help
<wolfmitchell> still, thanks anyway :P
<wolfmitchell> thumper, just found out i can go in to work a couple hours later tomorrow, maintainance on the door card access system thing is happening and nobody can get in for the first couple hours of the day so... i have another hour til i have to go now i guess
<wolfmitchell> thumper, should I try redeploying MAAS and everything though?
<thumper> wolfmitchell: I'd wait for jam
<thumper> he isn't online yet
<wolfmitchell> how long do you think it'd be until they're on, thumper?
<thumper> soonish...
<thumper> but I have a meeting with them in an hour
<wolfmitchell> ah ok
<wolfmitchell> tbh i'm prob just gonna redeploy it lol, it won't take that long to do
<wolfmitchell> i'll wait til tomorrow first
<thumper> k
<wolfmitchell> ... i just looked, apparently i used an iso for ubuntu 18.04, not 16.04
<wolfmitchell> that... may cause some issues
<wolfmitchell> lol
<jam> wolfmitchell: "could not find unused subnet" is because LXD is trying to find a random IP address range in the 10.0.0.0/16 range. And since you've already allocated all of 10.0.0.0/8 we're unable to do so.
<jam> Now, why we would be allocating a random one, rather than bridging to your host network, etc, is something still worth figuring out
<wolfmitchell> jam, is there a way to change what range it uses?
<wolfmitchell> possibly to something in 172.16/12?
<jam> wolfmitchell: if you're using containers underneath MAAS I wouldn't think Juju would be trying to create an lxdbr0 in the first place, but that we'd just bridge the network device of the host machine
<jam> wolfmitchell: there isn't a way to change the prefix that Juju looks at, unfortunately, but we should definitely figure out why we're searching at all.
<wolfmitchell> jam, should I try splitting our 10/8 to just the subnets under that we're using as a temporary workaround? or would DHCP not hand out 10/8 as the subnet being used for clients?
<wolfmitchell> jam, what if I change this line of code to use a diff subnet? https://github.com/juju/juju/blob/master/container/lxd/initialisation_linux.go#L364
<wolfmitchell> lol
<wolfmitchell> jam, so what i'm gonna do is just redo my lab to use 172.16.0.0/16 anyway since i was planning on moving away from 10.0.0.0/8 anyway (it caused issues with my work vpn)
#juju 2018-04-10
<cynthiaoneill> Is there a working jujuclient written in golang?
<thumper> cynthiaoneill: hey there
<thumper> cynthiaoneill: juju is written in golang
<thumper> why?
<thumper> I'm not sure what you are after
<cynthiaoneill> We are writing a kubernetes CRD and controller which is written in golang.  We will need to run juju commands from the controller, or from a jujuclient container
#juju 2018-04-11
<cynthiaoneill> It would be nice if there was a jujuclient in go that can create a new client and issue commands from it
<thumper> cynthiaoneill: there is the api package from github.com/juju/juju that is what the juju CLI uses
<thumper> cynthiaoneill: but we haven't gotten around to pulling that out of the main package
<thumper> main repo that is
 * thumper -> lunch
<cynthiaoneill> thumper: thanks! we'll take a look at it.
<stub> Also https://launchpad.net/juju-api-client
<thumper> stub: that seems to be private
<rick_h_> oooh, look at that pretty profile UX on JAAS now
<rick_h_> and that shell in the gui for all button wheeee
 * rick_h_ talks up stuff while he writes emails and preps juju show notes/demos (2hrs away!)
<cory_fu> rick_h_: Unfortunately, I have an appointment at the same time as the show, so won't be able to join to talk about https://jujucharms.com/u/johnsca/aws/ (note: still a bit in progress, hence in my personal namespace, but is usable)
<rick_h_> cory_fu: k, I used it to hack up a demo of route53'ing the dns-name used in the jujushell charm for let's encrypt support
<rick_h_> cory_fu: I think your aws charm is a different beast and something that would be great to talk about in a future show perhaps
<cory_fu> rick_h_: Wait, what did you use for the demo?  Not the charm?  Just the trust feature?
<rick_h_> cory_fu: yea, I'm just talking about hte trust feature today
<cory_fu> kk
<rick_h_> cory_fu: something to get the people's minds interested in 2.4 testing and such :)
<rick_h_> so your stuff was a great kicking off point for me to check out some existing stuff
<rick_h_> juju show in 20min warning!
<rick_h_> kwmonroe: bdx hml magicaltrout zeestrat ^ fyi
<rick_h_> links coming soon
<rick_h_> the "watch the show and eat popcorn" link https://www.youtube.com/watch?v=FehQJhqx6co
<rick_h_> if you want to join the conversation the link is "https://hangouts.google.com/hangouts/_/suy4w5z7pjaz3faa3zz6eq32cae"
<zeestrat> rick_h_: You up for random strangers like me?
<rick_h_> zeestrat: feel free
<rick_h_> always welcome some community folks
<rick_h_> just click that second link zeestrat
<zeestrat> Jupp. Let me just sort out the audio
<kwmonroe> rick_h_: show notes for k8s 1.10:  changelog https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md
<rick_h_> ty for the link kwmonroe
<zeestrat> rick_h_: Thanks for the invite. A+, would do again!
<rick_h_> zeestrat: lol thanks! We'll be back in 14 days
<zeestrat> rick_h_: Already in the calendar!
#juju 2018-04-12
<imrankhakoo> hello, i'm having issues bootstrapping a bionic juju controller. I'm not seeing anything obviously wrong in  the cloud-init logs of the bootstrap node, but it's stuck with the following output:
<imrankhakoo> Launching controller instance(s) on maas...
<imrankhakoo>  - xbcq4n (arch=amd64 mem=4G cores=1)
<imrankhakoo> Installing Juju agent on bootstrap instance
<imrankhakoo> Fetching Juju GUI 2.12.1
<imrankhakoo> Waiting for address
<imrankhakoo> Attempting to connect to 10.1.1.118:22
<imrankhakoo> Connected to 10.1.1.118
<imrankhakoo> Running machine configuration script...
<imrankhakoo> any ideas? This worked fine 2 weeks ago. i did do an apt upgrade in the meantime, so possibly a regression?
<gnuoy> Am I right in thinking there is no way to influence where juju looks locally for a charm? I have a bundle that references a local copy of a charm and I'd like the ability to influence where juju looks for that local charm. There used to be JUJU_REPOSITORY env variable or juju deploy --repository= but it seems both are gone
<gnuoy> https://jujucharms.com/docs/2.3/reference-environment-variables#juju_repository-(deprecated)
<balloons> gnuoy, you can specify path to the charm when deploying. Does that help?
<gnuoy> balloons, in a bundle ?
<balloons> imrankhakoo, does the container have networking connectivity if you ssh in?
<balloons> imrankhakoo, are you running lxd 3.0 now?
<gnuoy> balloons, actually, the desire is that the bundle has "charm: ./myapp" rather than any other path
<balloons> gnuoy, I think this will help: https://jujucharms.com/docs/2.3/charms-offline-deploying
<gnuoy> balloons, I've been a bit vague about my use case, sorry. Basically I want to include an example bundle with a charm and be able to run it irrespective of where the charm has been downloaded to. I had originally used a relative path in the bundle "charm: ../../app" but I'd rather do "charm: ./app" and somehow manipulate where juju looks for the local charm. I'm really after exactly what JUJU_REPOSITORY env variable used to do
<cory_fu> jamespage: Any objection to me cutting a release of charmhelpers?  It seems that https://github.com/juju/charm-helpers/pull/150 at least is critical for several charms, including CDK
<balloons> gnuoy, sadly I don't know the answer, but I think it should be possible
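[editor's note] Absent JUJU_REPOSITORY, the relative-path approach gnuoy mentions looks roughly like this; the directory layout is hypothetical, and local charm paths in a bundle are resolved relative to the bundle file itself:

```shell
# Hypothetical layout:
#   myapp/                 <- the charm
#   myapp/examples/bundle.yaml  contains "charm: ../../myapp"
# The relative path resolves from the bundle file's location,
# regardless of the current working directory:
juju deploy ./myapp/examples/bundle.yaml
```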
<cory_fu> I'll take that as "no problem"  ;)
<cory_fu> Cynerva, tvansteenburgh: 0.18.8 is release
<cory_fu> d
<Cynerva> thanks cory_fu
#juju 2018-04-13
<imran_khakoo> I'm seeing "E: Unable to locate package juju-mongodb3.2" when I bootstrap a new 18.04 controller. Any news on when this package will be available?
<parlos> Good Morning!
<magicaltrout2> hello folks
<magicaltrout2> amulet
<magicaltrout2> its asking me for juju-deployer
<magicaltrout2> what faux pas am I making?
<magicaltrout2> rick_h_: any clue?
<magicaltrout2> are you supposed to use juju-deployer with juju 2.x & amulet?
<rick_h_> magicaltrout2: otp, but I think that amulet never lost the deployer dep so it's working as expected?
<magicaltrout2> well deployer wants paths that don't exist in juju 2.0 like .juju/environments.yaml etc
<cmars> hi, does CDK & ceph work with azure storage? i'm trying to deploy ceph and attach storage, but it never seems to detect the block devices
<kwmonroe> magicaltrout: yeah, amulet will use juju-deployer.  get it with pip3 install juju-deployer.
<kwmonroe> magicaltrout: when you want to run it with juju 2.x, make sure you pass the -e <controller>:<model
<rmcd> list
<kwmonroe> magicaltrout: if you're just running amulet tests directly, it should be smart enough to use the current model.  but if you're calling juju-deployer in your own script, make sure you stick in the -e.
<magicaltrout> hmm thanks kwmonroe i'll give it another prod later
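[editor's note] kwmonroe's amulet/deployer advice in one place; the controller and model names are placeholders:

```shell
# amulet still depends on juju-deployer:
pip3 install juju-deployer

# with juju 2.x, point deployer at a model explicitly via -e,
# since the old ~/.juju/environments.yaml no longer exists:
juju-deployer -e mycontroller:mymodel -c bundle.yaml
```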
<hml> zeestrat: ping
<zeestrat> hml: pong
<hml> zeestrat: hi - question on your email , what cloud are you bootstrapping?
<zeestrat> hml: Localhost/lxd. Can't remember if I checked MAAS.
<hml> zeestrat: 2.4-beta1 is almost out the door, so i'm not sure anything can be added.  i'd recommend emailing the juju mail list for discussion and getting more spotlight on it.
<zeestrat> hml: Sounds good. Thanks for taking a look :)
<hml> zeestrat: i've also been playing with the lxc profiles
<hml> see if i can get constraints working for you that way
<hml> but that limits individual memory constraints to a single model
<hml> not optimal
<zeestrat> hml: Yeah, the problem is multiple different mem constraints in the same model. The workaround we have works so don't stress.
<hml> :-)
<zeestrat> hml: Thanks again. Have a nice weekend!
<hml> zeestrat: you too!
<ananke> was wondering if somebody could help me out figuring out general architecture question regarding juju and maas (very green on those subjects). say I have a dozen servers under maas management, and I want to use juju for various app deployments. I've bootstrapped juju onto maas, and it took on an entire physical node
<ananke> now if I deploy any application stack with juju, each 'application' uses an entire physical node, which is a bit of an overkill of course. So my question is, what are some typical ways of combining maas+juju with virtualization, and would I be using juju to deploy them, or maas?
<ananke> maas seems to have an option to manage kvm virtualization, but juju store doesn't have anything for that. there are container solutions, but maybe I'm overlooking something obvious
<zeestrat> ananke: No, it's a bit of pain atm. Regarding the bootstrapped juju controller, folks usually create some kvm vm's on a dedicated machine or even on the MAAS controllers themselves and add them to MAAS so you have more machines to pick from. Then when you bootstrap juju, you can select those kvm vm's with a constraint or MAAS tag. Check out https://docs.maas.io/2.3/en/nodes-add#kvm-guest-nodes
<zeestrat> ananke: I believe using kvm vm's got even easier with MAAS if you use their pod/composable hw setup: https://docs.maas.io/2.3/en/nodes-comp-hw
<ananke> zeestrat: thank you. I may have to revisit the white board and figure out if maas+juju are still worth exploring. I was hoping maas+juju would help manage that middle layer
<zeestrat> ananke: Regarding deploying applications with juju, you can use the same kvm trick as with the bootstrapped juju controller but what most folk use and would be most appropriate is LXD containers. They are well supported and make it easy to deploy multiple applications on the same host.
<ananke> zeestrat: thanks, that makes sense. I haven't spent much time with lxd, but it's certainly something we'd be willing to learn
<zeestrat> No worries. At work, we dedicate 2 boxes for a bit of redundancy (can do on 1) for MAAS controllers and juju controllers with kvm. For the rest we use LXD.
<ananke> wonder if maas+openstack+juju combinations are common. seems there are guides on how to deploy openstack on maas, but not sure if those three can be glued together
<ananke> zeestrat: that's what I was planning on doing too, have a couple controllers per rack. I have two racks of equipment in two discrete facilities, dedicated mostly for testing various things. maas & juju seemed like a good fit, and they appear to be well polished
<zeestrat> ananke: That's exactly what we do. MAAS manages the hardware and juju deploys openstack on the hardware from MAAS.
<ananke> zeestrat: and then does juju communicate with openstack for deploying apps, or how is that done? I wonder if there are any blueprints for that setup
<zeestrat> ananke: Are you thinking of deploying openstack or deploying apps on top of openstack? Juju will do both, just good to know what you're looking for :)
<ananke> zeestrat: frankly, either or, whatever makes more sense. being able to deploy apps on top of openstack would be ideal
<hml> ananke: you can use juju to deploy openstack on your maas hardware, then turn around and have juju use the openstack to deploy applications.
<ananke> I'm coming from the traditional HPC clustering world, which has a very different approach to many things. we would like to create an on-prem cloud solution, first to serve our IT needs, second to potentially have a self-service cloud for our researchers
<zeestrat> ananke: I'd check out the openstack deployment guide if you're looking to deploy: https://docs.openstack.org/charm-deployment-guide/latest/. For more general info about the openstack juju charms (the recipes/playbooks used by juju) see: https://docs.openstack.org/charm-guide/latest/. For deploying apps on top of openstack see: https://jujucharms.com/docs/stable/help-openstack
<ananke> zeestrat: thanks, I was looking at the first document already earlier today, I'll check out the other one too. just trying to get my head wrapped around how all of this could function
<ananke> hml: ahh, that makes sense
<hml> for juju, maas, openstack, lxd, aws, gce, azure are all clouds... you can bootstrap with them then deploy
<ananke> I should redo the juju setup then, and have it use lxd on the maas controller, instead of an entire physical node
<hml> you can deploy units to lxd containers on machines already deployed
<hml> instead of new machines for each unit
<hml> check out the âto directive
<ananke> hml: right, that makes sense. however, I'm not familiar with lxd at all, and was hoping there was an existing solution with either maas or juju to manage lxd on those machines
<hml> ananke: you can try “juju add-unit <application> —to lxd:0”
<hml> that would add a unit of something to a new lxd container on machine 0
<hml> there are many permutations of specifying where a new unit will go… check out juju help add-unit
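Spelled out, a few of those placement permutations look like this (application names are illustrative; note the directive is two plain ASCII hyphens, `--to`):

```shell
juju add-unit mysql --to 0        # another unit on existing machine 0
juju add-unit mysql --to lxd:0    # new lxd container on existing machine 0
juju add-unit mysql --to lxd      # lxd container on a brand-new machine
juju deploy haproxy --to lxd:1    # placement also works at deploy time
juju help add-unit                # full list of placement directives
```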
<ananke> while I fumble through documentation, what's the relation of '0' in juju to maas? I imagine I would be able to query available systems from juju
<ananke> forgive the naive questions, I just started with juju this morning
<hml> not a problem  :-)
<hml> 0 is what juju defines as machine 0
<hml> if you run juju status 0
<hml> you can find which machine/isntance in your maas config is being used
<zeestrat> ananke: Just note that if you want to bootstrap juju on a MAAS controller, then your best bet is kvm as MAAS doesn't support lxd containers.
<ananke> zeestrat: thanks
<hml> zeestrat: you can still use lxd on a machine already deployed though yes?  just not use lxd as a maas machine?
<zeestrat> hml: Yes, just not as a MAAS machine which you need for bootstrapping juju.
<hml> lxd can be confusing, since we can use as a cloud, or to add containers to juju machines
<hml> used in multiple ways
<ananke> hmm, juju status 0 reveals nothing different than juju status <any number here>. https://paste.ofcode.org/yPdkLET3NNfTKjJwaHvRPq
<hml> :-)
<zeestrat> It's a bit confusing (it sure confused me in the start).
<hml> ananke: oops.. sorry  - you have to deploy a charm firstâ¦ :-)
<hml> juju deploy ubuntu âto lxd
<hml> will deploy a maas machine - then install the ubuntu charm on an lxd container on top of the machine
<ananke> ERROR unrecognized args: ["lxd"]
<hml> weirdâ¦ dash dash to?
<ananke> hmm. this is maas 2.3.0 with whatever latest stable juju was
<hml> my irc client is autocorrecting on me
<ananke> hml: ahh, some weird encoding took place. I retyped that, seems to be running now
<bdx> kwmonroe: happy friday
<ananke> looks like maas booted a system, and is installing ubuntu. yay
<kwmonroe> bdx: o/
<bdx> experiencing some issues with hadoop on openstack
<bdx> juju status | http://paste.ubuntu.com/p/DNdJprKXHH/
<bdx> kwmonroe: got a bug coming your way
<bdx> have you seen this before https://paste.ubuntu.com/p/TMCVJJb4jv/
<bdx> ?
<ananke> hml: thank you so much, this is helping me figure out how I can approach this stack
<bdx> when I look in /var/log/hadoop-hdfs/hadoop-hdfs-namenode-juju-6e95f5-hadoop-test-5.out
<hml> :-)
<bdx> https://paste.ubuntu.com/p/JvbskWgKSd/
<bdx> cat /var/log/hadoop-hdfs/hadoop-hdfs-namenode-juju-6e95f5-hadoop-test-5.out ^
<bdx> kwmonroe: is this telling me that xenial-cloud-img has different/more restrictive ulimit defaults that stop hdfs from starting?
<kwmonroe> bdx: i don't think so.. the .out is terrible.  see if there's more info in the .log (should be in the same dir)
<kwmonroe> bdx: if i had to guess, your NN can't resolve its own address.  i know that is particularly painful in openstack envs
<kwmonroe> i mean, trying to deploy hadoop in openstack envs..
<bdx> Oh I bet
<kwmonroe> which is https://github.com/juju-solutions/jujubigdata/issues/62
<kwmonroe> which is ... still not fixed :/
<bdx> cat /var/log/hadoop-hdfs/hadoop-hdfs-namenode-juju-6e95f5-hadoop-test-5.log | http://paste.ubuntu.com/p/FWgwPc2Yxr/
<bdx> totally
<bdx> I wonder if I can hook up the designate bits and make my way
<bdx> ill be looking into this
<bdx> guess I wont be filing any bugs for you:)
<kwmonroe> oh there's plenty filed already.  dont you fret.
<bdx> Caused by: java.net.UnknownHostException: juju-6e95f5-hadoop-test-5: Name or service not known
<bdx> boom
<bdx> ok
<bdx> kwmonroe: it's all about https://jujucharms.com/neutron-api/#charm-config-enable-ml2-dns
<bdx> they made it simple for the internal resolution
<bdx> real nice
<bdx> designate looks like it's for external dns
<bdx> kwmonroe: I was able to get around it with that config
<kwmonroe> whoa - super cool bdx!  i kept hoping it would fix itself (or at least get easier).  turns out waiting pays off!
<bdx> +!
<bdx> +1 same
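A sketch of the diagnosis and the workaround bdx describes. The resolution check runs inside the affected instance; `neutron-api` must match the name of the deployed application, and `enable-ml2-dns` is the option from the charm page linked above:

```shell
# Inside the hadoop unit: can the instance resolve its own hostname?
hostname -f
getent hosts "$(hostname)"   # no output here matches the UnknownHostException

# From the juju client driving the openstack model: enable internal ml2 DNS
juju config neutron-api enable-ml2-dns=true
```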
#juju 2020-04-06
<wallyworld> babbageclunk: easiest PR you will ever see, i promise https://github.com/juju/juju/pull/11408
<wallyworld> tlm: forgot to mention, can you look at this issue to see if it's related to the admission controller work? they've disabled rbac i think to work around https://github.com/ubuntu/microk8s/issues/1071#issuecomment-606501275
<tlm[m]> Ok
<tlm[m]> No nothing to do with admission, it is related to incorrect setup of RBAC from the charm by the look of things
<wallyworld> tlm: feel free to leave a comment with any insight to help unblock them as the current question is whether juju is at fault
<tlm[m]> Will do
<wallyworld> if not babbageclunk, then anyone else want to +1 a +0/-0 PR https://github.com/juju/juju/pull/11408
<babbageclunk> sorry, wasn't seeing notifications!
<babbageclunk> wallyworld: approved
<wallyworld> all good :-)
<babbageclunk> I don't think I've ever seen a +0/-0 pr before
* thumper[m] changed the topic of #juju to: All about Juju devops tool. Discourse at https://discourse.juju.is, project site https://juju.is
<achilleasa> can I get a CR on https://github.com/juju/juju/pull/11405?
<stickupkid> achilleasa, I've started on it, but these jumped out
<stickupkid> https://github.com/juju/juju/pull/11405#pullrequestreview-388045403
<achilleasa> stickupkid: what is that v2 bit?
<stickupkid> so even though they're additive you should still create a new struct for best practice
<achilleasa> ah, well, this is a new 2.8 API which we will freeze for the beta
<achilleasa> so there is no need to version additions/modifications atm
<stickupkid> ah ok, it's fine
<stickupkid> I'll continue onwards
<achilleasa> I 'll copy this to the PR
<stickupkid> ta
<achilleasa> stickupkid: all that work for a meter status :D
<stickupkid> achilleasa, new question https://github.com/juju/juju/pull/11405/files/6614b1de953c3c527dd7eeb1777bd6496a2eb585#diff-02d6e400b72751b96ebe2ac7086886ee
<achilleasa> stickupkid: pushed a commit to address your comments
<stickupkid> manadart, achilleasa very quick CR - https://github.com/juju/juju/pull/11410 - whilst I'm looking at AWS manual deployments
<stickupkid> was requested that we expect the deployment to land on unit 0
<hml> achilleasa:  changes made,
<achilleasa> hml: I answered you question but I think my meter status changes caused a conflict with one of your tests
<achilleasa> s/you/your
<hml> achilleasa:  added a few nitâs on method comment headers.  looks like static analysis caught a few things.
<achilleasa> hml: yeah.... just force-pushed the version without lint checks; missed some mocks :-(
<achilleasa> hml: I will address the comments and re-push
<achilleasa> hml: do you think the rename makes things more consistent?
<hml> achilleasa:  yes, i was thinking of it myself a few times.  with the addition of the other pieces to the doc, state became ambiguous.
<achilleasa> hml: so besides the model migration (juju/description) and your relation changes, what else do we need to complete the root card?
<achilleasa> there is a card about migrate steps (the TODO one for the operator?)
<achilleasa> anything else that we are missing?
<hml> achilleasa:  there is a card for migration.
<hml> achilleasa:  weâre missing ensuring the caas operator upgrades are correct.
<achilleasa> hml: https://trello.com/c/LEXkzAVm/2431-migration-steps-for-uniter-state-on-server is about the pending upgrade steps for the other state bits right?
<hml> achilleasa:  umm.. iâm reading migration.  :-). typo?
<stickupkid> I love petname command, it really does give so good names
<achilleasa> hml: ah... wasn't sure so I created another card :D
<stickupkid> s/so/some/
<stickupkid> lenient-pig
<achilleasa> stickupkid: in 11405 should I have bumped the MeterStatus facade version? I think I can get away with it because the controller will always be upgraded before the agents so nothing can call that API anyway, right?
<SpecialK|Canon> Hello! If I wanted to charm some software, is https://discourse.juju.is/t/charm-writing/1260 (from https://juju.is/docs/charm-bundles) the best place to start reading?
<achilleasa> hml: another small PR for you: https://github.com/juju/description/pull/75; also, do you have any further comments on 11411?
<hml> achilleasa:  are the changes made to 11411, then?  i'll take a 2nd look then
<achilleasa> hml: yes, it's green now
<hml> achilleasa:  is replated an autocorrectism?  https://github.com/juju/description/pull/75/files#diff-320e610bf9322fa08db897722b714520R12. not familiar with that one in this context.  :-)
<achilleasa> hml: that's me typing while a paw patrol cartoon is playing in the other side of the room :D
<hml> hahahahah!
<achilleasa> hml: fixed ;-)
<stickupkid> achilleasa, i'd just bump it
<hml> achilleasa:  75 approved
<achilleasa> stickupkid: I know but this one is a bit odd in that it uses an apiserver/common bit as a mixin
<achilleasa> hml: tyvm
<stickupkid> achilleasa, we're talking about SetUnitStateArg right?
<achilleasa> stickupkid: no, this guy: https://github.com/juju/juju/pull/11405/files#diff-6023308f657e593715902695e8a0cea5R44
<stickupkid> achilleasa, ho?
<achilleasa> stickupkid: omw
<hml> achilleasa:  11411 is approved with a small nit.
<achilleasa> hml: argh... I hate typing doc comments. I will clear that up and land it
<hml> achilleasa:  hear ya!  :-)
<achilleasa> stickupkid: CR please: https://github.com/juju/juju/pull/11413
<achilleasa> hml: any idea why this refuses to become green? https://jenkins.juju.canonical.com/job/github-make-check-juju/4876/console
<hml> achilleasa:  still have red?  just back from lunch
<hml> achilleasa:  https://pastebin.ubuntu.com/p/3wSsZMSpgC/
<hml> achilleasa:  but that is weirdâ¦ unless something else landed in a few hours between runs.
<hml> achilleasa:  i need to rebase 2/3 times
#juju 2020-04-07
<wallyworld> hpidcock: for whenever, here's that PR with the action data mutex added https://github.com/juju/juju/pull/11416. protects access from the hook commands and the uniter itself during and after running an action
<hpidcock> wallyworld: approved
<wallyworld> ta
<thumper> do we have any focal deployment tests in CI yet?
<wallyworld> thumper: i know we added focal support to all the test charms back in late feb, not sure about what else
<hpidcock> wallyworld: we don't use any cgo specifics from the github.com/coreos/go-systemd package that I can see
<wallyworld> good
<hpidcock> I take that back
<wallyworld> no backsies
<hpidcock> github.com/juju/juju/service/systemd uses github.com/coreos/go-systemd/util.IsRunningSystemd
<hpidcock> oh phew
<hpidcock> that uses a file, ALL good!
<hpidcock> 10/10 no problems
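For context on why "uses a file" means no cgo worries: `sd_booted(3)` (which `go-systemd`'s `util.IsRunningSystemd` mirrors) just checks for the `/run/systemd/system` directory. The same test from shell:

```shell
# systemd creates /run/systemd/system early in boot; its mere existence is
# the documented "are we running under systemd?" test (see sd_booted(3))
if [ -d /run/systemd/system ]; then
    echo "running under systemd"
else
    echo "not running under systemd"
fi
```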
<wallyworld> awesome
<hpidcock> https://www.irccloud.com/pastebin/IhJMfRZl/
<hpidcock> the snap is 125Mi
<hpidcock> so I think this is a winner
<hpidcock> bootstraps to lxd fine
<wallyworld> great, you checked the jujud binary on controller machine just to be sure?
<hpidcock> yep
<hpidcock> have now :)
<hpidcock> wallyworld: https://github.com/juju/juju/pull/11417
<wallyworld> looking
<hpidcock> or anyone else
<wallyworld> hpidcock: we can get rid of the x-dep plugin when we get to go mod :-)
<hpidcock> wallyworld: yeah I think it's probably best; we'll have our fork and see if we can get some features merged back into the official plugin
<hpidcock> I've tried to keep the parameters the same where I can, but there are features they don't expose we might want to update in the upstream plugin
<wallyworld> true, although i'm not sure people care about that plugin anymore
<hpidcock> no I mean the go plugin not the dep one
<wallyworld> ah right
<timClicks[m]> https://discourse.juju.is/t/juju-progress-report-2020-w14/2882
<wallyworld> kelvinliu: small fixes for beta1 https://github.com/juju/juju/pull/11418
<kelvinliu> looking
<kelvinliu> lgtm ty
<wallyworld> hpidcock: those jenkins qa changes merged,i guess you are going to run jjb?
<hpidcock> I've been iterating, just wanted to get my fixes landed
<hpidcock> was banging my head for more than the minutes it should have taken
<wallyworld> no worries, just double checking to ensure nothing slips by
<stickupkid> manadart, you got a sec?
<manadart> stickupkid: Yep.
<stickupkid> in daily
<manadart> achilleasa: Got a sec to look at https://github.com/juju/juju/pull/11421 ?
<achilleasa> manadart: looking
<achilleasa> manadart: approved
<manadart> achilleasa: Thanks.
<achilleasa> manadart: can you take a look at https://github.com/juju/juju/pull/11422?
<manadart> achilleasa: See how I go. I have to hit the road soon, but I'll look while my patch is landing.
<achilleasa> manadart: not in a hurry; got one more following up. If busy I can ask Heather to take a look when she gets online
<manadart> achilleasa: Approved it.
<rick_h_> stickupkid:  https://bugs.launchpad.net/juju/+bug/1730747 might be of interest to that reload-spaces work. I think we should be able to knock a few off the list
<mup> Bug #1730747: incomplete support for spaces in juju and maas <maas-provider> <spaces> <sts> <juju:Triaged> <https://launchpad.net/bugs/1730747>
<achilleasa> anyone up for a small review? https://github.com/juju/juju/pull/11423
<stickupkid> rick_h_, about 5-6 bugs in one :)
<rick_h_> stickupkid:  yea, exactly. Let's see if we can knock a couple out and maybe do individuals from there with tags for spaces and such for the future?
<stickupkid> achilleasa, can you look into testing the non-modified part?
<achilleasa> stickupkid: got a question about your PR comment
<stickupkid> achilleasa, the former. I want to see that we don't call SetState
<achilleasa> stickupkid: but that's essentially testing the Modified method, right?
<stickupkid> achilleasa, well,
<achilleasa> no objection to adding another test; just trying to understand what value we get out of it
<stickupkid> achilleasa, well I find it ensures that we're changing functionality based on a state
<stickupkid> achilleasa, generally test that
<stickupkid> achilleasa, as opposed to testing every error case
<achilleasa> stickupkid: pushed a commit with the extra test; can you take a look?
<stickupkid> achilleasa, already ticked
<stickupkid> rick_h_, achilleasa can you delete the alpha space?
 * stickupkid hopes the answer is no
<achilleasa> stickupkid: my assumption is that it is read-only
<achilleasa> I mean, the instancepoller puts virtual IPs into alpha; if you deleted that, where would they go if everything must belong to a space?
<achilleasa> (because VIPs don't match any of the CIDRs)
<stickupkid> achilleasa, yeah chaos indeed
<rick_h_> stickupkid:  nope
<stickupkid> wicked, this is great news
<rick_h_> stickupkid:  alpha space is the default, always there, home for things not assigned
<thumper> babbageclunk: are you still after a review for PR 11426?
<babbageclunk> thumper: yes please
<thumper> ok, I'll look now
<babbageclunk> ta
<thumper> babbageclunk: code generally looks good, just one request around behaviour when things don't go well
<babbageclunk> ok thanks
<babbageclunk> thumper: I don't know about that - *in theory* there could be another change to the uniter state file later that would need a different upgrade step to migrate - if this one goes nuclear and trashes the file for changes it doesn't know about that might be a bad thing?
<babbageclunk> thumper: definitely a theoretical problem though
<thumper> perhaps instead of using the validity function, perhaps just read the yaml, look for the remote-unit key, and if it is there add the remote-application one?
<thumper> that way we avoid the validity issue
<thumper> I think you are right on that aspect
<babbageclunk> ok, I'll tweak it to do that. Actually what's there now won't work in the scenario I'm describing, the Write call will fail if the state's invalid after a change.
<babbageclunk> So it needs to be done anyway
<babbageclunk> I'll leave the validity check on the way in, but make sure we can still write the change even if the resulting state isn't (yet) valid.
<thumper> yeah, I guess the initial validity check is fine, because if it is valid, there is nothing to do
<babbageclunk> cool
<thumper> I was just wondering how best to deal with the issues...
<thumper> but I think you are right, there is potential that a later release would cause an invalid file in a different way
<thumper> and that this upgrade step isn't there to fix that
<thumper> but it would be called first if they are upgrading through multiple versions
<babbageclunk> Maybe a post-upgrade-steps sanity check? Just reads the state, and if it's bad then throws it away?
<thumper> so I agree with you now that deleting it isn't the right thing to do
<thumper> no... I don't think so
<thumper> consider being invalid for two reasons?
<thumper> 2.7 added remote-application
<babbageclunk> I mean, after all of the steps are run
<thumper> but say 2.8.1 adds another key that is needed
<thumper> so I don't think we should throw away the change just because the result isn't yet valid
<thumper> because we can't guarantee that a future step wouldn't fix it
<babbageclunk> right, but there's a point where all the available upgrade steps have run, if the state's invalid at that point then maybe we need to throw it away
<thumper> but instead of reading and writing using a structure, I'd recommend just using a map[string]interface
<thumper> ah... I see what I think you are getting at
<thumper> we don't have anything like that yet...
<babbageclunk> yeah, thinking about it I should be reading into a map, just in case the current defn of State's changed so much that I can't load the file as is.
<babbageclunk> will change that too
<timClicks> is anyone with experience with reactive charms able to provide some advice here? https://discourse.juju.is/t/workload-stuck-in-maintenance-status/2890
#juju 2020-04-08
<tlm> kelvinliu: does the charm supply the name for a mutating webhook config ? https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#mutatingwebhook-v1-admissionregistration-k8s-io
<tlm> don't mean the object meta name
<kelvinliu> checking
<kelvinliu> yes it does
<kelvinliu> charm provides .meta.name and .WebHooks[].Name  tlm
<tlm> awesome thanks
<kelvinliu> np
<babbageclunk> wallyworld: thumper's not around, can you approve https://github.com/juju/juju/pull/11426 instead? (I'm sure he wouldn't mind)
<wallyworld> ok, let me take a look
<wallyworld> babbageclunk: just a couple of small nits
<babbageclunk> awesome, thanks
<wallyworld> kelvinliu: +1 on PR but maybe discuss? have a read of the comments
<babbageclunk> wallyworld: alright, I'm going to hit merge on that even though thumper's not approved it
<wallyworld> he will get over it
<kelvinliu> wallyworld: I saw u mentioned more tests and to merge the hookChangeParams, HO?
<wallyworld> kelvinliu: sure, give me 1 mnute to hit send on an email
<kelvinliu> yep
<babbageclunk> hey _thumper_ can you approve https://github.com/juju/juju/pull/11426? I tried getting wallyworld to approve it and then smash it in because you weren't around (having made the changes we talked about), but it looks like github prevents that.
<_thumper_> babbageclunk: sure
<babbageclunk> thanks!
<_thumper_> I did leave a comment before
<_thumper_> but didn't think I  needed to approve as well
<_thumper_> but have done now
<babbageclunk> _thumper_: yeah, because you requested changes last time it enforces getting you to re-review
<_thumper_> yeah, sorry
<_thumper_> my "thumper" nick seems to be temporarily unavailable just now
<_thumper_> no idea why
<babbageclunk> he's been hacked!
<_thumper_> boo
<wallyworld> _thumper_:  glad this was caught. raised against 2.8-beta1 but affects 2.7.6 also https://github.com/juju/juju/pull/11427
 * _thumper_ looks
<_thumper_> wallyworld: approved, but you could simplify
<wallyworld> ok
<wallyworld> ty
<wallyworld> _thumper_: you can but that's not why i got the controller state
<wallyworld> i could have used either model or controller state to get config, but i had the controller one anyway
<wallyworld> the controller state is used to get the cloud service info
<_thumper_> ah, ok
<_thumper_> is that controller specific?
<wallyworld> no, but here we are looking up the controller api addresses - the controller cloud service mongo doc is what we need
<wallyworld> as opposed to a cloud service doc for a workload app
<wallyworld> when we bootstrap, the controller gets a k8s service made for it, like any other k8s app
<wallyworld> and that service is mapped to a cloud service doc in mongo
<wallyworld> and it records the address info
<tlm> quick PR for anyone around https://github.com/juju/juju/pull/11428
<babbageclunk> tlm: approved
<tlm> cheers
<wallyworld> manadart: would love a +1 on forward port of 2.7 when you're online https://github.com/juju/juju/pull/11429
<manadart> wallyworld: Looking now.
<wallyworld> ta
<thumper> wallyworld: you coming to this meeting too?
<kelvinliu> wallyworld: a tiny PR to add a helper func to caas layer for raw-k8s-set hook command, +1 plz https://github.com/juju-solutions/layer-caas-base/pull/26
<wallyworld> kelvinliu: ty, lgtm so long as it's been tested live
<kelvinliu> lol
<kelvinliu> could u just merge it plz ty
<wallyworld> kelvinliu: merged, but make sure to test to ensure there's no regression
<kelvinliu> sure tested by copying to charm, I will be testing it again many times tmr
<wallyworld> all good :-)
<manadart> wallyworld: You still there?
<manadart> wallyworld: Nevermind; approved the merge patch.
<hpidcock> wallyworld: https://github.com/juju/juju/pull/11395 is ready for re-review with unit tests and requested changes. Ping me tomorrow/tonight/tuesday if there are any issues.
<stickupkid> hpidcock, JUST DON'T PING ME ON THE WEEKEND?
<hpidcock> hah
<wallyworld> hpidcock: ty, will likely look tomorrow
<stickupkid> manadart, thoughts https://github.com/juju/juju/pull/11425
<stickupkid> manadart, wallyworld that forward port of 2.7 won't land as there is a compile error
<wallyworld> ah balls, ok
<wallyworld> stickupkid: manadart: it's because in 2.8 there's no uniter state file anymore, so the method to load it has gone away from the source code, and the 2.7.6 upgrade step fails to compile, so i'll need to resurrect that code for the upgrade step
<stickupkid> sick
<wallyworld> i'll do it tomorrow
<stickupkid> manadart, is this what you meant by endpoint bindings? Or are you talking about the charms endpoint bindings https://github.com/juju/juju/pull/11425
<manadart> stickupkid: What you have there is correct for now, but we also want to get st.AllEndpointBindings and continue if any of those are equal to the deletion candidate.
<stickupkid> haha, don't delete anything in reality
<stickupkid> manadart, also this is becoming somewhat a behemoth of a method, I may split it out
<hml> achilleasa:  https://github.com/juju/juju/pull/11414 is ready for review.  :-)
<achilleasa> hml: will take a look in a few min; I have almost managed to bootstrap with an operator from my dockerhub account
<hml> :-D
<hml> achilleasa:  looking at the unit test fails on 11414, looks like to do with no directory created.
<stickupkid> manadart, what was the issue with constraints validation not being correctly implemented
<manadart> stickupkid: It's naively correct for single network machines, but in order to handle multi-NIC, we need to compare units, not the application in aggregate.
<stickupkid> ah
<rick_h_> each unit lives on its own machine and guimaas is a good example that what spaces/etc are in play vary from machine to machine
<hml> achilleasa:  wallyworld reminded me you can do a juju dump-db too if you've enabled the develop feature flag at bootstrap.
<hml> instead of juju-db.bash
<hml> achilleasa: fixed fail in UniterSuite.TestUniterRelations,   i *think* thatâs the last one.
<achilleasa> hml: any ideas how to upgrade on k8s? I am using dev mode but not sure how to get it to pull the new operator
<hml> achilleasa:  ho?
<achilleasa> sure
<stickupkid> manadart, got a sec, I'm being dumb
<manadart> stickupkid: In Daily.
<hml> achilleasa:  weâre hitting https://bugs.launchpad.net/juju/+bug/1871496
<mup> Bug #1871496: Juju fails on deploy with error cloud service not found for microk8s <juju:Fix Released by wallyworld> <https://launchpad.net/bugs/1871496>
<achilleasa> hml: left comments in 11414. I stil need to go through the last set of files but that probably means a second pass in the morning. Take a look esp at my last comments regarding the state implementation and let me know what you think
<hml> achilleasa: having the relation data SetState all at once is definitely a goal, but would be an even bigger rewrite of this code, i think.  it sounds like polish to me?
<achilleasa> hml: note that my comment was not to do it all at once. just have an object to call State once, then give you a method to get the state for a relation by ID and to save the state of a relation. You still make a SetState for one relation but avoid the Ensure call
<achilleasa> the end result will be the same but the proposed way would likely involve less API calls and be a bit cleaner
<achilleasa> also, having to rewrite a large chunk of code while in beta wouldn't really be considered as polishing ;-)
<hml> achilleasa:  iâm not clear on why that avoids the Ensure call.  The ensure call is made at other times, but it looks like the primary need is for new relations, not saved ones.
<hml> achilleasa:  ho?
<achilleasa> hml: omw
<wallyworld> hml: you can do dump-db anytime if the feature flag is enabled, it's only a client side flag
<hml> wallyworld: ack
<hml> wallyworld: i referenced a sha for you to fix the merge from 2.7 to develop
<wallyworld> great ok, will look after next meeting, ty
<babbageclunk> wallyworld: hml's change looks good to me
<wallyworld> yeah, was what i was going to do myself this morning, saved me the trouble :-)
<hml> wallyworld: it was blocking me for a bit… then other stuff came up after i came up with the fix.  :-)
#juju 2020-04-09
<babbageclunk> wallyworld: were there any conflicts backporting it?
<wallyworld> babbageclunk: yup, due to 2.8 supporting cancelling actions
<wallyworld> babbageclunk: were you still going to review the PR?
<thumper> wallyworld: does this bug look familiar? https://bugs.launchpad.net/juju/+bug/1828076
<mup> Bug #1828076: juju deploy fails because no machines can be found in MaaS in a given AZ <canonical-bootstack> <juju:Triaged> <https://launchpad.net/bugs/1828076>
<wallyworld> thumper: not really. there was an AZ issue in AWS that was fixed, but I am not aware of any recent maas fixes
<wallyworld> kelvinliu: tlm: got time for a HO?
<kelvinliu> wallyworld: yep
<tlm> yeah
<thumper> wallyworld: was it a general az fix or aws specific?
<wallyworld> thumper: i thought aws but can check
<wallyworld> thumper: i was wrong, it was generic code
<wallyworld> so the bug could well be fixed
<wallyworld> was tested on aws as that's where the issue was reported
<kelvinliu> wallyworld: the hook failed because the leader was gone. the leader was gone because the map of pod <-> unit was lost.
<wallyworld> kelvinliu: otp, sorry
<kelvinliu> nw
<wallyworld> kelvinliu: yeah, so seems like a real issue to fix
<wallyworld> kelvinliu: since xtian is away, could you please review this backport whenever convenient? https://github.com/juju/juju/pull/11430
<kelvinliu> but I wasn't able to re-produce the pvc terminating error
<kelvinliu> looking now
<tlm> i am wondering if the pvc thing came from a storage increase ?
<babbageclunk> wallyworld: oops got halfway through before I had to go - doing it now!
<wallyworld> tlm: you are not allowed to change storage config with juju once deployed. not to say there isn't a bug there somewhere
<babbageclunk> ha, it just updated that kelvinliu has, sorry
<wallyworld> babbageclunk: all good, i thought you were away for the arvo!
<tlm> ok just not sure where that 1g to 20g came from
<babbageclunk> no, back
<tlm> may have read the logs wrong
<wallyworld> tlm: we use 20G for the controller PV
<tlm> ah k
<wallyworld> 1G by default for workload
<kelvinliu> done
<wallyworld> tyvm
<kelvinliu> np
<wallyworld> i will test on iaas to be sure we didn't break anything there
<wallyworld> kelvinliu: what file did the golang version selection for building debs live in?
<kelvinliu> wallyworld: https://bazaar.launchpad.net/~juju-qa/juju-release-tools/packaging-juju2-default/view/head:/debian/control#L8
<wallyworld> ty
<kelvinliu> np
<stickupkid> manadart, ping
<manadart> stickupkid: Pongzor.
<stickupkid> daily
<elox70> Watching from Sweden!
<elox70> I'have a question to the live show. Can I ask it here ?
<stickupkid> elox70, sure
<elox70> Cool! I'm also on youtube.
<elox70> Thanx for the great show!
<manadart> stickupkid: Simple one: https://github.com/juju/juju/pull/11432
<stickupkid> manadart, we've already got an IDSet
<stickupkid> manadart, probably needs renaming https://github.com/juju/juju/blob/develop/core/network/subnet.go#L171
<manadart> stickupkid: That's what I have renamed.
<stickupkid> ah
 * stickupkid is an idiot
<stickupkid> manadart, that's actually our dependency list of reload spaces https://paste.ubuntu.com/p/5PWFKnqjFp/
<manadart> stickupkid: Okay.
<manadart> stickupkid: Another precursor patch for my validation work: https://github.com/juju/juju/pull/11433
<hml> achilleasa:  got the last resolver test done - it wasn't testing what it thought it was testing.  now it does.  :-)  i'll push the state manager change shortly
<manadart> achilleasa: Want to do QA on 11431? Code looks good, but I am running out of disk here and reset my mk8s.
<hml> achilleasa:  pushed, working on the state changes now
<achilleasa> manadart: ok, I will try it locally in about 5 min
<rick_h_> manadart:  stickupkid please let me know when the reviews/merge is ready for wallyworld's patch please.
<achilleasa> rick_h_: running QA steps for ^^
<rick_h_> cool ty achilleasa
<stickupkid> manadart, I've really picked one thread here, there is some really funky stuff here.
<manadart> stickupkid: That's the Juju way.
<stickupkid> essentially we "shim" state, but actually it does different things, it's like an adapter
<stickupkid> I might rename it to adapter and then it won't lie
<hml> achilleasa:  I think i have all of the changes made for the relation pr
<hml> achilleasa:  pondering the drive by to remove the panic.  wondering side effects: what might be left behind because of an error rather than a restart of the uniter caused by the panic.
<achilleasa> hml: I think if you bubble up the error the uniter will restart anyway. However, a panic (unless the dependency engine catches and suppresses panics) would likely cause the agent to be restarted, right?
<hml> achilleasa:  iâd like to be sure.  would rather not do in a drive by
<achilleasa> it's probably fine to leave as-is for now
<hml> achilleasa:  will look at in a separate pr
<achilleasa> however, we should probably get rid of them when we see them again in the future
<achilleasa> panics and long-running tasks are probably a bad combination ;-)
<achilleasa> rick_h_: QA for 11431 complete and PR has 2 ticks; we can prob go ahead and land it
<rick_h_> achilleasa:  awesome ty
<achilleasa> hml: reviewing 11414
<hml> achilleasa:  i have a few tests in state to clean up
<achilleasa> hml: got some questions about error handling; overall it looks great so far. Do you want me to get started on the QA steps?
<achilleasa> otherwise I will go back to my caas tests
<hml> achilleasa:  yes please on the QA steps, i've added some more detail there
<achilleasa> hml: ok!
<hml> achilleasa:  reading questions
<achilleasa> hml: QA works as expected. Nice!
<hml> achilleasa:  sweet
<hml> achilleasa:  updating the error handling
<achilleasa> also, I think I figured out why my k8s upgrade was not picking up the fix. I will verify now...
<hml> achilleasa:  i have to clear any operator things from the local docker sometimes...
<achilleasa> I think it does not use AlwaysPull so it never pulled the new image from dockerhub
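For reference, the Kubernetes-side knob behind this is the container's image pull policy: with IfNotPresent (the default for tagged, non-latest images) a node reuses its cached image, so a re-pushed tag with the same name is never fetched. Forcing a fresh pull looks roughly like this (illustrative pod spec fragment, not Juju's actual operator spec; the image name is hypothetical):

```yaml
# Illustrative fragment: pull the image on every pod start,
# instead of reusing a cached image with the same tag.
containers:
  - name: operator
    image: docker.io/example/juju-operator:2.7   # hypothetical image
    imagePullPolicy: Always
```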
<hml> achilleasa:  iâm thinking the qa should deal with some subordinates also, that has different code paths
<hml> achilleasa:  pushed up all my changes... including some error stuff.  i have a gremlin in mockSteps28Suite.TestMoveUnitAgentStateToControllerIAAS setup code that i have to nail, but wanted to get the rest up for you
<achilleasa> hml: looking
<achilleasa> hml: error handling looks correct to me. Given that it's just a test that needs fixing do you want me to approve the PR now before my EOD?
<hml> achilleasa:  yes please!  and thank you for the in-depth review on a large pr
<hml> playing whack-a-mole with setup directory creation.
<achilleasa> hml: done. After merging please run your QA steps again in conjunction with the ones from 11423 to make sure that both upgrades and model migrations work without issues
<hml> achilleasa:  rgr
<hml> achilleasa:  where are we with the juju upgrades for caas investigation?
<wallyworld> achilleasa: docs updated for k8s upgrade testing
#juju 2020-04-10
<pmatulis> simple charm question. the charm store has a postgresql charm but it doesn't say what version it's based on. i'm actually looking for both 9.x and 10.x
<roadmr> pmatulis: the charm is multi-version, check the "version" setting
<roadmr> (string) Version of PostgreSQL that we want to install. Supported versions are "9.3", "9.4", "9.5", "9.6" & "10". The default version for the deployed Ubuntu release is used when the version is unspecified.
<pmatulis> roadmr, oh wow thanks
<pmatulis> is there a special reason why there isn't an option for `remove-application` and `remove-unit` that conserves machines after having removed the only remaining units on them? some bare metal machines can take a very long time to provision
<rick_h_> pmatulis:  yea, it depends on what series, since it's using the released deb
<rick_h_> pmatulis:  because we don't destroy things on accident. You can "fake that" with the clean juju script included (typically more meant for cleaning on manual machines)
<rick_h_> pmatulis:  so going the manual add-machine route with the clean script might be faster/easier
 * rick_h_ is trying to find it
 * pmatulis has never heard of the clean script
<pmatulis> it would be nice to have these options though. if the user has to pass them explicitly, nothing would be destroyed by accident?
<rick_h_> pmatulis:  also looks like the docs say --keep-instance will keep it around
<pmatulis> rick_h_, hm, i don't see that option
<rick_h_> pmatulis:  yea, seems docs are out of date there or that was wishful thinking. I'm trying to find that removal script. :/ I know it's there and added in 2.6 but can't find the note
<rick_h_> pmatulis:  https://github.com/juju/juju/pull/10877
<pmatulis> rick_h_, sweet thanks. still can't detect the --keep-instance option
<rick_h_> pmatulis:  right, that's only in the docs and might not have ever been there. Just came across it while searching for that script
<pmatulis> rick_h_, do you have a link to that option in the docs? i don't see it
<pmatulis> ha, i see it now
#juju 2020-04-11
<pmatulis> where does the value under the (juju status) AZ column come from if MAAS is not being used (where it would be the MAAS zone)? what kind of AZ is it?
<pmatulis> (i'm assuming a Nova AZ)
