#juju 2012-02-20
<danwee> hello, i need help with juju. i copy-pasted the error; can anyone take a quick look please: http://paste.ubuntu.com/849764/
<danwee> help needed with juju
<danwee> anyone help with juju ?
<Leseb> danwee: what is it about?
<danwee> finally, can you check this out and tell me what you think : http://paste.ubuntu.com/849764/
<danwee> Leseb: can you check this out and tell me what you think : http://paste.ubuntu.com/849764/
<Daviey> Gah, nobody charmed openvpn yet?
<hazmat> hi danwee have you been able to verify that the machine is actually up?
<SpamapS> danwee: after this long, perhaps you should send an email to the mailing list?
<SpamapS> danwee: clearly none of our schedules matches up with yours ;)
#juju 2012-02-21
<TeTeT> hi, I'm following http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage and I run into a problem at bootstrap already. the virtual net cannot be set up
<TeTeT> http://pastebin.ubuntu.com/851168/ http://pastebin.ubuntu.com/851170/
<TeTeT> I'm trying this in a vm; maybe the network setup does not work with the vm's network emulation?
<koolhead17> TeTeT: i tried it on a physical machine :P
<TeTeT> koolhead17: hmm
<SpamapS> TeTeT: what release of juju? (apt-cache policy juju)
<TeTeT> SpamapS: Installed: 0.5+bzr457-0ubuntu1 from precise
<TeTeT> SpamapS: is it supposed to work on a vm managed through libvirt and using the default virbr0 connected interface? Or does it need a real bridge to the ethernet card of the system?
<SpamapS> TeTeT: it should work fine inside a VM yes
<TeTeT> SpamapS: sudo virsh net-start default just does not work, maybe because of vm inside of a vm constraint?
<SpamapS> TeTeT: no, the networking part of virsh is pretty simple
<SpamapS> TeTeT: those "virbr" bridges are just bridges with no physical components.
<TeTeT> SpamapS: ok. any idea how to get this working?
<SpamapS> TeTeT: whats the problem with virsh net-start default?
<TeTeT> SpamapS: error: internal error Network is already in use by interface eth0
<benji> I'm having a problem running on EC2; the bootstrap appears to work, then I run juju status, lie to ssh about verifying the fingerprint, and then juju status hangs, never to return
<SpamapS> TeTeT: sounds like you have it configured to be a "real" bridge instead of a virtual one.
<SpamapS> benji: does 'juju ssh 0' work?
 * benji tries
<benji> SpamapS: nope, I get "2012-02-21 10:20:05,876 INFO Connecting to environment..." and then a hang
<TeTeT> SpamapS: doubt it, the vm has the normal 192.168.122.x address for virbr0 on the host
<SpamapS> benji: interesting...
<benji> SpamapS: this strace output looks like it might mean something to someone that knows what's going on: http://paste.ubuntu.com/851476/
<SpamapS> benji: hmmm.. you are running with default-series: precise ?
<benji> SpamapS: yep
<SpamapS> benji: have to go afk for a bit
<SpamapS> benji: try oneiric.. if that works.. we have a bug in precise
<benji> will do
<benji> SpamapS: it works fine with oneiric
<SpamapS> benji: ok, do you want to open a bug report against juju? I can do it too if you don't want to.
<benji> SpamapS: I can
<SpamapS> benji: *thank you*!
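For reference, default-series is set in environments.yaml; a minimal sketch of the stanza being toggled above, with credentials and other required keys omitted and the environment name illustrative:

```sh
# minimal environments.yaml sketch; 'sample' and the omitted keys are illustrative
cat > ~/.juju/environments.yaml <<'EOF'
environments:
  sample:
    type: ec2
    default-series: oneiric   # flipping this from precise avoided the hang above
    # access-key, secret-key, control-bucket, admin-secret, etc. omitted
EOF
```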
 * hazmat upgrades to precise
<hazmat> TeTeT, did you already have a bridge setup? it looks like its just the default libvirt bridge there
<TeTeT> hazmat: nah, I didn't setup an extra bridge.
<TeTeT> hazmat: oh, the reboot didn't change anything for me
<TeTeT> hazmat: which is mentioned in the askubuntu article
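The error TeTeT quotes usually means the default network's subnet overlaps an existing interface; a rough diagnosis sketch, assuming stock libvirt tooling:

```sh
# rough sketch for "Network is already in use by interface eth0" (stock libvirt)
virsh net-dumpxml default   # note the <ip address='192.168.122.1' .../> block
ip addr show eth0           # check whether eth0 already sits on that subnet
virsh net-edit default      # if so, move the network to an unused range
virsh net-start default
```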
<_mup_> Bug #937889 was filed: Hang on "juju status" with EC2 and Precise <juju:New> < https://launchpad.net/bugs/937889 >
<m_3> benji SpamapS: I had problems with precise juju instances at the end of last week too... it couldn't install packages b/c of dep problems (txzookeeper iirc)
<m_3> didn't get a chance to really debug it tho, but it's easily reproducible with just a bootstrap
<m_3> (ec2)
<jcastro> SpamapS: m_3: did you guys get a mail from wordpress for the server blog?
<benji> SpamapS: https://bugs.launchpad.net/juju/+bug/937889
<_mup_> Bug #937889: Hang on "juju status" with EC2 and Precise <juju:New> < https://launchpad.net/bugs/937889 >
<m_3> jcastro: looking now
<m_3> benji: thanks, I'll +1 it
<jcastro> m_3: oh so it worked? awesome, didn't know if it was set up to send mail, nice work!
<m_3> jcastro: don't see it
<m_3> easiest is probably postfix/gmail to send... I'd bet any direct sends'll be already blacklisted
 * jcastro nods
<jcastro> our first issue, mail!
<m_3> jcastro: I've got that config snapshotted somewhere if you want me to dig
<jcastro> this will be fun
<m_3> jcastro: yup!
<jcastro> m_3: hey so, you think you can charm up lp:summit this week?
<jcastro> it'd be a nice win
<m_3> jcastro: lemme look at it... I'm buried in stuff atm, but it wasn't too bad iirc
<SpamapS> jcastro: still powering through my email
<jcastro> SpamapS: it's ok it probably didn't send
<robbiew> jcastro: you got blog access now
<jcastro> yep, thanks
<jcastro> man, wordpress needs FTP to import blogs
<jcastro> m_3: every time we run into a problem I'm going to say "octo"
<m_3> jcastro: and I'll be happy to "+1" that
<m_3> jcastro: zk charm reviewed and ready to promulgate
 * m_3 rings the promulgate bell
<jcastro> oooh cute!
 * jcastro will blog that one
<jcastro> anything special I should care about, or is it in the readme?
<m_3> jcastro: readme's on the way still... it's almost eod for jamespage too so prob won't happen until tomorrow
<jcastro> ok no worries
<jamespage> \o/
 * jamespage is going to charm hbase tomorrow
<jamespage> m_3: I was thinking of using dotdee to manage the config for hbase - have you seen any use of it in charms so far?
<m_3> jamespage: have not
<m_3> I think that's a great idea
<jamespage> m_3: it would mean that you could generate the part of the config file associated with the event without having knowledge of the rest of the configuration
<m_3> some charms have used a combo of config snippets and sed... but dotdee would be a little less manual
<m_3> I've been experimenting with calling cheetah from the command line... (exporting vars into the env)... but it's... eh
<SpamapS> sed is a fail IMO
<SpamapS> we should be building config files from scratch or using dotdee
<m_3> sed can actually be a simpler/cleaner solution for small config changes
<m_3> and easier to maintain b/c it's just pure diffs
<m_3> but yes, for large-scale or complex config... it sucks
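The two styles m_3 contrasts, as a sketch; the file and key here are hypothetical:

```sh
# the small, "pure diff" style m_3 means; file and key are hypothetical
sed -i 's/^max_connections.*/max_connections = 500/' /etc/mysql/my.cnf
# the dotdee alternative: own one snippet in a .d directory and let the tool
# regenerate the whole file, so the charm never has to parse what's there
```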
<jcastro> m_3: hey the IP for that blog instance won't change will it?
<m_3> yeah, it totally will... lemme attach an elastic IP real quick... hang on
<jcastro> thanks
<jamespage> m_3: I did some cheetah stuff for tomcat7 I think
<m_3> jcastro: ok, 23.21.249.196... can you point the server blog url directly to that IP addr?  I can add a url like xxx.markmims.com in front of it if we need to
<jcastro> ok
<jcastro> hey should juju ssh blog/0 work?
<SpamapS> hazmat: is that reboot support I see landing in lp:juju ?!
<jcastro> m_3: hey so does assigning that IP break the existing aws URL?
<m_3> jcastro: you might need to specify environment... juju ssh -efido blog/0
<m_3> jcastro: checking on everything now
<hazmat> SpamapS, yup, and upstartification, and all kinds of yummy goodness
<jcastro> m_3: I get a connection timed out error
<m_3> jcastro: wow... it looks like aws just removed the old dns entry
<jcastro> and the ec2 url just stopped working for me altogether
<m_3> gave it a new one that matches the elastic ip
<m_3> ec2-23-21-249-196.compute-1.amazonaws.com
<jcastro> hah, awesome
<m_3> but now it seems like juju's lost
<jcastro> yep, I was going to ask, did we just find a new bug?
<m_3> I've done this before without this problem... lemme poke around and see what happened
<jcastro> And we're off to a great start!
<m_3> in fact, it's up without this problem in another environment
<jcastro> m_3: save your history, this will be a good post.
<m_3> nice, now juju status reports different addresses for the service unit and the machine
<m_3> it's trying to ssh to the old one that the service unit shows (and not getting through)
<jcastro> yep
<m_3> jcastro: you can still get in using the explicit machine id... 'juju ssh -efido 2'
<jcastro> ah!
<jcastro> jamespage: thanks for the promulgation!
<jamespage> jcastro, no problemo!
<_mup_> Bug #937949 was filed: juju status shows addresses that are out of sync <juju:New> < https://launchpad.net/bugs/937949 >
<m_3> jcastro: ^^
<mars> Hi guys, I have a question about installing from Launchpad private PPAs: is there a good charm cookbook recipe for doing this?
<mars> This looks good for a start: http://charms.kapilt.com/~openstack-ubuntu-testing/precise/nova-volume/hooks/nova-volume-common
<m_3> mars: https://code.launchpad.net/~james-page/charms/precise/zookeeper/trunk is another example
<m_3> it's pretty straightforward once you have packages _in_ the ppa
<m_3> the charm's install hook just runs add-apt-repository, then apt-get update, then apt-get install
<m_3> getting packages into the ppa is a whole other ballgame :)
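A minimal install-hook sketch of the public-PPA flow m_3 describes; the PPA and package names are hypothetical:

```sh
#!/bin/sh
# minimal charm install hook for the flow described above; names are hypothetical
set -e
add-apt-repository ppa:example-team/example-ppa
apt-get update
apt-get install -y example-package
```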
<mars> m_3, thanks, that is a nice way to do public archives
<mars> Private ones require a bit more work, what with the custom URL and all.  AFAIK add-apt-repository doesn't handle them
<mars> The two recipes can probably be merged into what I want: install from a private PPA, public PPA, or archive.
<m_3> mars: ah, gotcha... sorry
<m_3> yeah, for any ppas outside of launchpad, the tough thing would be key exchange
<m_3> but I really don't know
<m_3> we try to make sure any downloaded payloads can be cryptographically verified... for the charms in the charm store
<m_3> for other charms, anything goes... pulling from github for node.js, npm, gems, etc is pretty common
<SpamapS> m_3: whats the txzookeeper problem exactly?
<m_3> SpamapS: bootstrap a precise environment... ppa or distro doesn't matter
<m_3> then ssh directly to the instance (juju ssh hangs)
<m_3> dig through the log... packages aren't installed b/c of a dep problem (I vaguely remember it being txzk...)
<m_3> I can reproduce and dig out the logs if you want, just lemme know
<m_3> http://paste.ubuntu.com/851637/
<m_3> SpamapS: ^^
<m_3> SpamapS: added your key to ubuntu@ec2-174-129-55-132.compute-1.amazonaws.com
<SpamapS> m_3: that looks like just plain broken images
<m_3> SpamapS: it's strangely similar to my desktop problem atm: http://paste.ubuntu.com/851649/
<SpamapS> m_3: yours looks like an out of sync mirror..
<m_3> oh, nice... /me fixing that!
<m_3> very scared I have to reinstall
<SpamapS> m_3: definitely not
<SpamapS> m_3: just point at us.archive.ubuntu.com
<jcastro> SpamapS: m_3: incoming mail about a charm contest, please review by end of business today, so I can launch this badboy
<m_3> jcastro: roger roger
<m_3> SpamapS: yeah, taking my apt-cacher-ng out of the picture
<m_3> SpamapS: no change
<SpamapS> m_3: have you tried dist-upgrade ?
<SpamapS> m_3: or apt-get -f install ?
<m_3> yup, same
<SpamapS> m_3: that does not make any sense.
<m_3> variations of -f or --fix-xxx didn't seem to do much
<SpamapS> m_3: you have something damaged.. what version of gnome-control-center?
<m_3> I hadn't been using any mirrors.... other than apt-cacher-ng
<m_3> gnome-control-center 1:3.2.2-2ubuntu8
<SpamapS> m_3: 1:3.3.5-0ubuntu2 is the latest. It should be installing that
<m_3> it was an upgrade from oneiric and not a fresh install
<SpamapS> mine too
<SpamapS> mine goes back to 10.10 :)
<m_3> the juju one is more important though
<SpamapS> yes I'm looking into the juju one
<SpamapS> I think thats just broken images
<m_3> jcastro: comments in on the charm contest
<SpamapS> hazmat: https://code.launchpad.net/~clint-fewbar/juju/use-packages-yaml/+merge/94040
<SpamapS> hazmat: fixes running juju w/ precise instances
<SpamapS> hazmat: oddly enough.. I used lbox propose, but I don't see it talking to rietveld. :-P
 * SpamapS lunches
<hazmat> SpamapS, it needs a -cr flag to make it go to rietveld
<SpamapS> ah, next time.. its trivial anyway
<SpamapS> benji: hey we found a resolution for the bug you reported this morning
<SpamapS> err
<SpamapS> I should say, a few hours ago.. might not have been morning
<benji> It was morning somewhere.
<SpamapS> benji: thanks for trying out precise. :) The problem was that libc6 was updated between alpha1 and now, so debconf was prompting for some questions
<SpamapS> And the real underlying problem was that we were doing apt-get in runcmd instead of listing the package in cloud-init's 'packages' line
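A sketch of the difference SpamapS describes, as cloud-init user-data; the package name comes from this thread, the rest is illustrative:

```sh
# sketch of the fix described above (cloud-init user-data, illustrative)
cat > user-data.yaml <<'EOF'
#cloud-config
# the fix: list packages here so cloud-init installs them non-interactively...
packages:
  - python-txzookeeper
# ...instead of the old, prompt-prone way, where a debconf question (here,
# libc6's) could block the boot even though apt-get itself was given -y:
#runcmd:
#  - apt-get install -y python-txzookeeper
EOF
```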
<m_3> SpamapS: did you merge and kick off a ppa build?
<benji> SpamapS: interesting; good turn-around time on the fix
<m_3> SpamapS: nevermind... I see
<hazmat> SpamapS, that seems a little strange only because by default it will use nightly builds on ec2, i suppose that's not the case for openstack
<SpamapS> hazmat: no, we use released builds by default on ec2
<SpamapS> def get_current_ami(ubuntu_release="oneiric", architecture="i386", persistent_storage=True, region="us-east-1", daily=False, desktop=False, url_fetch=None):
<SpamapS>     data["version"] = daily and "daily" or "released"
<hazmat> ugh.
<SpamapS> hazmat: thats the more conservative approach
<SpamapS> hazmat: trouble is, there's no way to override it
<SpamapS> though I suspect that will come with the full implementation of constraints
<hazmat> SpamapS, we should be using the nightlies, we tell cloud-init to do an update/upgrade
<hazmat> so we're just wasting bandwidth
<hazmat> and it also means we get newer kernels
<SpamapS> hazmat: true!
<SpamapS> though
<SpamapS> less repeatable deploys with that strategy
<SpamapS> can't tell you how many times I had version skews drive me *NUTS* in tracking down problems at 3am
<hazmat> SpamapS, is the reality any different if we're doing an upgrade on the machine prior to setting up juju?
<SpamapS> hazmat: I'm suggesting that the upgrade is also a problem
<SpamapS> hazmat: I think ultimately we just need ways to say "fire all install hooks" on a service so that you can re-assert versions
 * hazmat nods
<hazmat> version skew pincer movement, wiped out entire roman legions, didn't even need the elephants at cannae
<SpamapS> lol
<hazmat> yeah.. i can see it both ways
<hazmat> if you're reproducing or adding units to the service, you definitely want the same versions, if you're deploying fresh, i'd see wanting to have the latest stable-updates applied
<hazmat> as a goal
<hazmat> SpamapS, sounds worth some more discussion on list to poll a larger audience
<SpamapS> hazmat: perhaps add-unit should not do the update/upgrade
<SpamapS> hazmat: another thought is to defer updates/upgrades to charms always
<hazmat> SpamapS, but that could be at a different version delta than the original unit
<hazmat> SpamapS, ie. if they're off releases, and the first unit did the update/upgrade, then the second unit is stuck at the base image version.. doesn't really make sense
<SpamapS> hazmat: right, so really, perhaps update/upgrade should be pushed off to charms.
<hazmat> SpamapS, i think it's probably more useful to use the nightly unless it's an add-unit, in which case we use the previously used image and we don't update/upgrade.. BUT.. there are lots of management tools and probably colo services that might also be doing package management
<SpamapS> hazmat: *or* disconnected from deploy/add-unit
<SpamapS> hazmat: I think we should just document how it works now, and think about how to improve the "update the whole service" story
<hazmat> SpamapS, that's fair, although i think it could use some discussion on the wider list as well to help advance the story
<SpamapS> hazmat: I asked earlier.. but saw no response.. did I see rebooting landing in lp:juju ?
<jimbaker> SpamapS, hazmat replied at 10:04 MST: SpamapS, yup, and upstartification, and all kinds of yummy goodness
<jimbaker> nice to see those features land, as i mentioned in #juju-dev at the time :)
<SpamapS> ahh ok cool!
<SpamapS> Thats like.. huge
<jimbaker> SpamapS, indeed!
<SpamapS> https://bugs.launchpad.net/juju/+bug/863526
<_mup_> Bug #863526: Juju agents do not handle reboots <production> <juju:Triaged> < https://launchpad.net/bugs/863526 >
<SpamapS> So, thats kind of a meta-bug, but does a reboot work now?
 * SpamapS tries it
<hazmat> SpamapS, you can kill any juju agent and it should do the right thing
<hazmat> mad props to fwereade_ who did the heavy lifting
<hazmat> which reminds me, i should circle back to the branches i had waiting on that
 * SpamapS bootstraps and then reboots to see what happens
<hazmat> SpamapS, that might not work ;-)
<hazmat> SpamapS, its more that agents can be restarted, and killed at arbitrary points and do the right thing
 * SpamapS would really like to close some 'production' bugs
<SpamapS> hazmat: any reason that might not work?
<hazmat> SpamapS, the session expiration stuff hasn't landed, as it was waiting on the restart work as a mechanism
<SpamapS> so if the reboot doesn't happen in like, 3 seconds, something bad happens?
<hazmat> SpamapS, if the zk server advances the clock and expires the session, the agents might not handle that.. i dunno there is some support there for killing old sessions when the process comes back .. so it might work
<hazmat> but if the agent is alive when the session expiration happens then they don't do anything for it
<SpamapS> hazmat: well in this case, I'm rebooting zookeeper too...
<hazmat> SpamapS, right, but zk will effectively advance the clock on all extant sessions when it comes back up... like i said it might work, i just can't guarantee it yet
<SpamapS> hazmat: I don't understand what advance the clock means, and I don't understand why an expired session does anything. :-P
<SpamapS> doesn't the agent just start a new session?
<hazmat> SpamapS, it will when it starts up, but not if the session is expired while its up
<SpamapS> Actually the agents are failing on start
<SpamapS> http://paste.ubuntu.com/851980/
<SpamapS> juju.errors.JujuError: No session file specified
<hazmat> SpamapS, hmm. they should all have session files specified if the env was started with the latest trunk
<SpamapS>     JUJU_ZOOKEEPER=localhost:2181 python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid
<SpamapS> bootstrap seems to not use the upstart yet?
<hazmat> SpamapS, quite possible, its a large set of changes that just landed on trunk.. i'll do some additional qa testing now
<SpamapS> ./juju/providers/common/tests/data/cloud_init_branch_trunk:runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju,
<SpamapS> DOH
<SpamapS> missed some apt-get's
<hazmat> SpamapS, you want the cake or egg  treatment ;-)
<hazmat> jk
<SpamapS> hazmat: the egg goes on my face.. and the cake.. well
<SpamapS> THE CAKE IS A LIE
<hazmat> but so much tastier
<SpamapS> oh
<SpamapS> hahaha
<SpamapS> hazmat: stand down
<SpamapS> my local version of juju was the distro version
<SpamapS> *DUH*
<hazmat> cool, i do remember testing that earlier (killing machine agent), it looks ok locally.. still not 100% sure about the restart capability
<SpamapS> hazmat: but there are branches that will solve that in flight?
<hazmat> SpamapS, yes
<hazmat> heading out to check out a user group meeting, g'night
<SpamapS> The system is going down for reboot NOW!
 * SpamapS crosses fingers
<SpamapS> hazmat: have fun
<SpamapS> initially it looks to have worked quite nicely
<SpamapS> Heh.. it helps that we reach runlevel 2 at 7 seconds.
<SpamapS> 2012-02-21 23:25:19,466:2636(0xb73896c0):ZOO_INFO@zookeeper_close@2304: Closing zookeeper sessionId=0x135a235ffb30001 to [127.0.0.1:2181]
<SpamapS> 2012-02-21 23:25:47,188:576(0xb74ed6c0):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.3
<SpamapS> looks like it worked fine for machine/provisioning agent
<SpamapS> sweet... and they can be restarted with 'service juju-machine-agent restart'
<SpamapS> I just noticed
<SpamapS> am I not being asked to verify ssh keys now?!
<SpamapS> is that landed?
<SpamapS> because if it is.. woot!
<SpamapS> if I"m just dumb and fixed my .ssh/config to not be asked.. then ignore moe
<SpamapS> ignore me too.. but really ignore moe.. that jaerk
#juju 2012-02-22
<SpamapS> interesting..
<SpamapS> config-changed fires when the unit agent is restarted
<benji> config-changed used to fire twice on deploy, I don't know if that is intentional or not, but I thought it was a little odd
<benji> /ignore moe
<hazmat> SpamapS, its not landed
<hazmat> benji, config-changed is meant to be idempotent.. it's fired once, always before the service is started, just so the app can pull config and put it in place.. and then it will be fired again if the config is changed
<_mup_> juju/symlink-guard r455 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> juju/symlink-guard r456 committed by kapil.thangavelu@canonical.com
<_mup_> address review comments, extract symlink constant to charm base
<_mup_> juju/symlink-guard r457 committed by kapil.thangavelu@canonical.com
<_mup_> clean up error message
<_mup_> juju/trunk r465 committed by kapil.thangavelu@canonical.com
<_mup_> merge symlink-guard verify that charm symlinks don't extend outside of the charm and verify file types being included, internal symlinks are ok. [r=bcsaller,jimbaker][f=928348]
<_mup_> juju/enhanced-relation-support r6 committed by jim.baker@canonical.com
<_mup_> Sketch of impl details
<dazz> Guys, can you scale (Add units) to mysql?
<SpamapS> dazz: right now that just creates two independent mysql servers..
<SpamapS> dazz: that would be useful in some cases.. such as if you are sharding/partitioning
<dazz> which is fine, is it clustered still?
<dazz> automatically? That's how I thought it would have been in my head.
<SpamapS> dazz: its not "clustered" per se
<SpamapS> dazz: there's probably a way to make them scale out in some kind of ring replication.. but not without some thought.
<SpamapS> dazz: it would be cool to add something like galera replication to the charm and just have it automatically scale out that way
<dazz> np cheers
<SpamapS> dazz: just curious.. how much experience do you have with scaling mysql?
<dazz> little bit
<dazz> nothing huge though
<dazz> One of the "awesome" things I was looking at juju for was the automatic scaling of mysql.
<dazz> automatically cluster them as you scale etc.
<SpamapS> dazz: I think we could put together a few different config options that would allow one to say something like 'deploy mysql --set scaling-mode=ring|galera|standalone' ..
<dazz> I assume really, these would be custom mysql charms for this ability.
<dazz> that would be awesome
<SpamapS> dazz: no not custom.. it should be able to be encoded in the single mysql charm
<dazz> yeah
<SpamapS> dazz: juju allows customization very easily, but it encourages making things configurable
<dazz> right but the current mysql charm doesn't do this right?
<SpamapS> dazz: right. It only supports one-way replication
<dazz> yeap
<dazz> so if I were to extend the way the charm currently is, you'd like it ;)
<SpamapS> dazz: part of the reason for that is that your application needs to understand how to use a mysql cluster, and there really aren't any opensource apps that do.
<dazz> hrmm
<SpamapS> mediawiki does know how to use readonly slaves..
<SpamapS> which is why the mysql charm supports readonly slaves. :)
<dazz> wonder if we can do some sort of load balancer for mysql....
<SpamapS> dazz: sure, but without a synchronous replication technology like galera, you can't guarantee reads from one will be consistent with writes from another.
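SpamapS's scaling-mode idea, sketched as a charm config.yaml option; this is purely hypothetical and not something the mysql charm actually implements:

```sh
# hypothetical config.yaml stanza for the 'scaling-mode' idea above; the
# mysql charm does not actually have this option
cat > config.yaml <<'EOF'
options:
  scaling-mode:
    type: string
    default: standalone
    description: Replication topology to build out; one of ring | galera | standalone.
EOF
```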
<adam_g> SpamapS: still up?
<SpamapS> adam_g: yeah, just poking at things. Whats up?
<adam_g> SpamapS: you mentioned there was some recent juju breakage in precise? is that still an issue? some issue hit all the CI nodes at once tonight, something ive not seen before
<adam_g> agent startup fails with: juju.errors.JujuError: No session file specified
<SpamapS> adam_g: the issue was that in the cloud-init blocks it was doing 'apt-get -y install', which, if it pulls in something that asks a debconf question, would be a fail.
<SpamapS> adam_g: the problem is that you are trying to spawn juju units with an older version of juju I think
<SpamapS> adam_g: do you tear-down the bootstrap every time?
<adam_g> SpamapS: no.. im waiting on a re-bootstrap now, im sure it'll resolve
<SpamapS> adam_g: its possible the change made today had some unintended consequences... when I saw that error, I had used an older build to bootstrap with the newer stuff in the ppa and it failed
<adam_g> yeah, im running with juju-origin: ppa, i think the bootstrap node was up for a week or something stupid
<adam_g> i need to write a nightly job to teardown and rebootstrap to avoid that
<SpamapS> adam_g: the latest version actually allows you to *reboot*
<SpamapS> adam_g: we might even be able to upgrade juju on a box.. :)
<adam_g> SpamapS: as in, upstart jobs?
<SpamapS> aye
<adam_g> suhweet
<SpamapS> adam_g: apparently there are still some cases where the restart will fail.. something with zookeeper and clocks and sessions
<SpamapS> but this is still super sweet, as we can in theory have a long running bootstrap node that we can upgrade
<SpamapS> adam_g: yeah I think the issue you're seeing is that the newer agents require a session file argument..
<SpamapS> so we basically just broke *every* running bootstrapped environment
<adam_g> heh
<SpamapS> unless people manually update juju on their machine 0, and then manually restart the provisioning agent
 * SpamapS files a bug
<SpamapS> I really think juju bootstrap needs to *cache* the version of juju it installs/deploys, and deploy that one rather than whatever is in juju-origin
<adam_g> just run squid alongside zookeeper, and setup repository access thru it via cloud-init on nodes, right? :)
<adam_g> okay been in front of this thing since 8am. startin to feel like Daviey. g'night
<SpamapS> adam_g: me too.. time to sleep. :)
<_mup_> Bug #938463 was filed: New agent requirement for session file breaks all existing bootstrapped envs using juju-origin PPA <juju:New> < https://launchpad.net/bugs/938463 >
<adam_g> SpamapS: note, still getting the same error, this time on the bootstrap node: http://paste.ubuntu.com/852331/
<SpamapS> adam_g: make sure whatever ran 'juju bootstrap' is also the latest version
<SpamapS> Seems like that session file option could have a default to prevent this.
<adam_g> SpamapS: is there something i need to pass as session file?
<SpamapS> adam_g: no
<SpamapS> adam_g: you just need the latest juju from the PPA
<adam_g> ok
<adam_g> upgraded client node as well.
<adam_g> thanks
<SpamapS> adam_g: note that 'distro' would probably be a more stable option. :)
<_mup_> Bug #938521 was filed: constraints spec contains historical information <juju:New> < https://launchpad.net/bugs/938521 >
<Leseb> Has anyone already set up juju with orchestra in a localhost-only environment? (not an EC2 instance). There are so many bugs in there :(
<niemeyer> Good morning!
<bac> hi niemeyer, yesterday i posted to juju list from 'brad.crittenden@canonical' instead of 'bac@canonical'.  could you approve the message and check the settings to allow the alternate address in the future?  sorry for the bother.
<niemeyer> bac: Definitely, sorry for the trouble
<niemeyer> bac: Done, also whitelisted it
<bac> niemeyer: thanks!
<niemeyer> bac: np
<xerxas> Hi all
<xerxas> is juju useable from mac ?
<xerxas> I mean , the machine i type command on
<xerxas> as juju is python, I think it would easily work
<xerxas> but I haven't looked at the code
<xerxas> I'm just asking in case you already know it does run or doesn't run
<xerxas> or maybe someone did it
<benji> hi all, I'm having an issue similar to the one I had yesterday: when bootstrapping in EC2 on oneiric "juju status" hangs
<benji> hey! it's working.  I must have just needed more patience.
<benji> how does one deploy a service on a large EC2 instance?
<benji> I'm trying to just change default-instance-type and default-image-id without any luck.
<m_3> benji: I use: 'default-instance-type: m1.large' and 'default-image-id: ami-6fa27506'
<m_3> benji: that image might not be ebs-backed... I usually prefer ephemeral
<m_3> benji: that's oneiric btw
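The two settings m_3 quotes, in environments.yaml form; the append below assumes the target environment's stanza is the last entry in the file:

```sh
# the settings m_3 quotes (oneiric image, possibly not EBS-backed; see above);
# this append assumes your environment's stanza is the last entry in the file
cat >> ~/.juju/environments.yaml <<'EOF'
    default-instance-type: m1.large
    default-image-id: ami-6fa27506
EOF
```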
<hazmat> xerxas, yes, the client is mac compatible; if it's not, it's a bug. we do rely on one main extension, the zookeeper bindings
<hazmat> xerxas, it is packaged in systems like homebrew or darwinports i believe
<xerxas> ok
<xerxas> ahh
<xerxas> didn't see, sorry
<xerxas> doesn't seem to be in homebrew
<xerxas> hazmat:  but I'll look
<xerxas> thanks
<hazmat> xerxas, ? https://github.com/mxcl/homebrew/blob/master/Library/Formula/zookeeper.rb
<xerxas> ahh , you mean zookeeper
<xerxas> but zookeeper is installed on the "controller" host
<hazmat> xerxas, it is but we need the bindings for the client to talk to the controller host
<xerxas> I mean, the juju workflow is: my workstation <=> a server (EC2, let's say) that has zookeeper and controls the infrastructure <=> servers for charms
<xerxas> ok
<xerxas> then I don't need zookeeper, I just need the python zookeeper client, am I wrong ?
<hazmat> xerxas, that's right
<hazmat> xerxas,  this is a homebrew recipe attempt for juju ... http://repo.chenetz.net/juju.rb
<hazmat> xerxas, but really since its python, you can just grab a checkout and run it, if you have the zk binding (+ python-yaml)
<xerxas> yeah
<xerxas> sure
<xerxas> thanks
<xerxas> I was just asking here before doing something stupid (If you know it doesn't work, it would have saved me some time ;) )
<xerxas> hazmat:  thanks, I'll try it
<xerxas> then
<jcastro> m_3: hey did you see this wrappz work in lp:charms?
<jcastro> SpamapS: fill this in for me:
<jcastro> SpamapS: we prefer charms be licensed under the ... X ...
<jcastro> solve for X!
<jcastro> or do we care as long as it's OSI?
<SpamapS> jcastro: GPLv3
<SpamapS> jcastro: but yeah, thats just what we prefer.. anything that is "Free" works. :)
<koolhead17> SpamapS: the bug is still not fixed after the upgrade of php :(
<SpamapS> koolhead17: durn it.
<SpamapS> koolhead17: there's code in the postinst to fix it .. so that surprises me.
<koolhead17> :(
<SpamapS> koolhead17: 	dpkg-maintscript-helper rm_conffile /etc/php5/conf.d/sqlite.ini 5.3.9~ -- "$@"
<SpamapS> koolhead17: that should have resulted in the file being renamed to /etc/php5/conf.d/sqlite.ini.dpkg-bak
<jamespage> I'm guessing there is no nice way to manage rolling restarts across service units within a deployed service?
<jimbaker> jamespage, seems like this could be done with a subordinate service
<jimbaker> (most management issues seem to boil down to, use a subordinate service)
<jamespage> jimbaker, hmm - so how would that work?
<jamespage> basically I need each service unit to
<jamespage> a) unload its regions
<jamespage> b) restart
<jamespage> c) reload its regions
<jamespage> in sequence
<jamespage> I keep breaking my hbase deployment when I do upgrade-charm at the moment :-)
<jimbaker> jamespage, the units of the subordinate services could coordinate with each other to manage this. the question i would ask, is there a lightweight mgmt solution out there that basically does something like this?
<jimbaker> writing one would not be so hard i suppose
<jimbaker> so basically map this new subordinate service, say rollingrestart, to such a solution whether custom or using some package out there
<jamespage> jimbaker, something service agnostic?
<jimbaker> jamespage, that would seem to be ideal - something that knows how to work with standard services
<m_3> jamespage: I was thinking we'd use multiple groups of slave services to do rolling upgrades... the complication is they'd be separate services
<jamespage> m_3 that thought had crossed my mind
<m_3> datacluster1, datacluster2, datacluster3
<jamespage> lol
<m_3> they'd all be separately related to the master
<jamespage> lemme try that out
<m_3> it's another reason to avoid peer relations imo... i haven't figured out how to really get that done with multiple peer-groups
<m_3> master-slave seems doable tho
<mars> Hi, could someone tell me where I might find the juju logs on a new ec2 unit?
<mars>   I am trying to debug the system setup - the machine isn't coming up (or is taking a very long time)
<m_3> mars something like /var/lib/juju/units/<service-name>/charm.log
<mars> m_3, cool, which host?  zookeeping (machine 0) or the new unit? (machine 1)
<mars> zookeeper even
<m_3> typically machine1... lemme look if the bootstrap node keeps them in /var/log/juju or not
<jamespage> m_3: using dotdee is working out quite well
<mars> hmm, /var/lib/juju is empty on machine 1
<m_3> mars: yeah, bootstrap node in ec2 has them in /var/log/juju/{machine,provisioning}-agent.log
<jamespage> think I have discovered a bug tho - need to find kirkland
<m_3> jamespage: cool!
<SpamapS> jamespage: how do hbase users usually keep from breaking their cluster?
<jamespage> SpamapS, there is a special 'formula' for doing it with a couple of helper scripts
<jamespage> SpamapS, http://hbase.apache.org/book/node.management.html
<mars> m_3, ah, darn.  Just realized machine 1 wasn't even provisioned (although it is pending in juju status).  bootstrap's /var/log/juju is empty.
<jamespage> basically you have to disable load balancing
<jamespage> restart each server in turn using the graceful_stop helper script
<jamespage> and turn it back on afterwards
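A condensed sketch of the upstream procedure jamespage outlines, per the linked hbase docs; the helper's flags may vary by hbase version:

```sh
# condensed sketch of the rolling-restart procedure above, per the hbase book;
# graceful_stop.sh flags may vary by hbase version
echo "balance_switch false" | ./bin/hbase shell   # disable load balancing
for rs in $(cat conf/regionservers); do
  ./bin/graceful_stop.sh --restart --reload "$rs" # unload, restart, reload regions
done
echo "balance_switch true"  | ./bin/hbase shell   # turn it back on afterwards
```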
<m_3> mars: other machines have /var/log/juju/machine-agent.log and then the charm log in /var/lib/juju/units/...
<mars> m_3, both directories are empty on the bootstrap machine.  That is a bit odd.
<m_3> mars: check that `dpkg -l | grep juju`
<mars> Obviously the data for wordpress machine 1 is somewhere in the system
<m_3> returns something nonempty
<SpamapS> jamespage: hbase is surprisingly similar to CEPH (or perhaps not, they attempt the same thing with different problem spaces)..
<m_3> mars: note that local (lxc) provider logs to different locations !!! :(
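Pulling together the paths m_3 lists for the EC2 provider (the local/lxc provider logs elsewhere); hostnames are placeholders:

```sh
# log locations per m_3 (EC2 provider); hostnames are placeholders
ssh ubuntu@<bootstrap-host> tail /var/log/juju/provisioning-agent.log
ssh ubuntu@<unit-host> tail /var/log/juju/machine-agent.log
ssh ubuntu@<unit-host> tail /var/lib/juju/units/<service-name>/charm.log
```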
<mars> m_3, yep, running from the PPA, 0.5+bzr464-1juju2~oneiric1
<SpamapS> wow..
<SpamapS> there are like, 4 invented versions there
<SpamapS> 0.5, -1, juju2, ~oneiric1 ...
<jamespage> SpamapS, it would work with some sort of service level lock
<SpamapS> jamespage: zookeeper would be a nice way to facilitate that.. and you already have it available. :)
<mars> m_3, yes.  I gave up on lxc.  It messed with /cgroups/ to the point where I couldn't boot a container any more.  I'm trying ec2 now after reading that it was stable
<jamespage> SpamapS, amusingly hbase uses zookeeper for coordination
<SpamapS> jamespage: yeah thats what I mean.
<jamespage> but not on restarts
<jamespage> :-(
<jamespage> but that does get me thinking....
<SpamapS> jamespage: what does this graceful_stop.sh use to make things happen? ssh?
<jamespage> SpamapS, erm yes
<SpamapS> so its ssh in a for loop.. awesome.
<jamespage> common theme in hadoop stuff
<jamespage> forget init scripts
<jamespage> and service orchestration - run everything from the master using SSH
<SpamapS> jamespage: yeah I've noticed that hadoop seems to hate best operations practices
<SpamapS> jamespage: so, yeah, I think you need to then setup ssh keys to use upstream's preferred method
<mars> m_3, so juju status knows about the request for a new machine - I assume that data should have been logged somewhere, or the command couldn't retrieve it.
<mars> m_3, Perhaps I could start by checking that the deploy request was logged correctly?
<jamespage> nooooooo
 * jamespage head in hands
<SpamapS> jamespage: and your upgrade-charm needs to include guards to prevent breaking the cluster.. by either ssh'ing into the main node to call graceful_stop .. or by using something a bit smarter like salt or fabric to roll these command executions around the right way
<SpamapS> jamespage: this is also what I've had to do on ceph.. and it kind of sucks. :-P
<SpamapS> jamespage: eventually it would be nice to have a "command bus" on top of juju's "config bus"
<jamespage> SpamapS, having multiple deployments of the hbase-slave charm against the same master works OK
<jamespage> but users can still break it
<SpamapS> jamespage: I think you can only make your charm refuse to break it... but one thing that sucks there is you might upgrade the charm and not know that it chose not to actually upgrade the software in favor of not breaking the cluster. ;)
<m_3> mars: sorry... got a hangout... gimme a few
<mars> np
<_mup_> juju/enhanced-relation-support r7 committed by jim.baker@canonical.com
<_mup_> More details
<jamespage> SpamapS, the other challenge is zookeeper
<jamespage> if I want to expand the quorum then I need to start all region servers and the master :-)
 * jamespage goes to look at the zk formulas repository
<SpamapS> jamespage: yeah, I think you're getting into the most difficult of services to orchestrate
<jamespage> SpamapS, w00t!
<SpamapS> jamespage: swift, I think, also gave adam_g fits because of the fairly simplistic things available to juju for orchestration
<m_3> mars: the definitive place that info is stored is zookeeper on the bootstrap node (check /var/log/juju and /var/log/zookeeper)
<mars> m_3, checking
<mars> m_3, all I see in the zookeeper log are a lot of java.nio.channels.CancelledKeyException and EndOfStreamException
<mars> in fact, all of zookeeper.log's information is about connection information, and nothing about data
<mars> plug a little bit of information about the zookeeper server startup at the top
<mars> *plus
<m_3> mars: sorry, let me back up a sec... you're running oneiric on ec2, using 'juju-origin: ppa' which catches juju-v464 on the units and you're running something like 464 on your precise(?) client?
<mars> m_3, oneiric client, 0.5+bzr457-1juju2~oneiric1
<m_3> mars: are you specifying default-instance-type and default-image-id?
<mars> m_3, ah, bootstrap machine is 0.5+bzr464-1juju2~oneiric1
<mars> m_3, no, I am not specifying either
<m_3> ok
<m_3> so this sounds like my standard setup... except maybe a little later client
<m_3> lemme check
<mars> I could start by updating my local juju package to 464
<m_3> yeah, running 463oneiric on my laptop, juju-origin: ppa picks up 464 on ec2 oneiric units
<mars> m_3, there also appears to be a problem with cloud-init on the bootstrap machine
<m_3> mars: please try that
<mars> last line in cloud-init.log on the bootstrap machine is:  [CLOUDINIT] cloud-init-cfg[ERROR]: errors running cloud_config [final]: ['scripts-user']
<mars> but first, a package upgrade
<m_3> and please destroy-environment then re-bootstrap
<mars> will do
<m_3> I'll recycle my env to make sure it hasn't broken in the last few hours
<mars> m_3, looking good this time, I have files in bootstrap:/var/log/juju
<m_3> yay
<SpamapS> m_3: another instance of bug 938463
<_mup_> Bug #938463: New agent requirement for session file breaks all existing bootstrapped envs using juju-origin PPA <juju:New> < https://launchpad.net/bugs/938463 >
<SpamapS> I think we might need to issue a fix of some kind where the session file argument is no longer required and just prints a warning "agent not restartable!"
<m_3> with a recommended action perhaps?
<SpamapS> Yeah
<SpamapS> "Upgrade your client!"
<m_3> although I hate to open the gates for 'please recycle your env'... that's just wrong
<SpamapS> Well.. frankly, all old clients "did it wrong"
<SpamapS> I wonder..
<SpamapS> we can, perhaps, fix this in packaging
<lifeless> rm -rf is not 'fixing'
 * SpamapS slowly lowers hand... n/m
<SpamapS> ;)
<SpamapS> I think I have a decent answer
<SpamapS> If you try to bootstrap or deploy with juju-origin: ppa .. we should probably check the PPA version and warn you if you are out of sync
<SpamapS> of course, that would require juju to actually know what its version is
<m_3> SpamapS: charmtester desperately needs 'juju --version'
<m_3> rather than dpkg | awk | sed
<m_3> BTW, true for juju-origin distro too
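The kind of dpkg pipeline m_3 means, until a real 'juju --version' exists:

```sh
# the dpkg workaround m_3 alludes to, pending a real 'juju --version'
dpkg-query -W -f='${Version}\n' juju
```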
<jimbaker> SpamapS, i suppose for bug 938463, it would be nice if we had updated juju.state.topology.VERSION. arguably this is a valid usage
<_mup_> Bug #938463: New agent requirement for session file breaks all existing bootstrapped envs using juju-origin PPA <juju:New> < https://launchpad.net/bugs/938463 >
<_mup_> Bug #938899 was filed: juju needs a '--version' option <juju:New> < https://launchpad.net/bugs/938899 >
<SpamapS> jimbaker: hm
<SpamapS> jimbaker: would not help for people with an old client.. but it would help save the already-existing bootstrapped environments.
<SpamapS> I can't help but wonder if this parameter can just have a sane default though
<jimbaker> SpamapS, one option worth exploring is to at least inform the user of the juju client when there's a version mismatch between client and a specific environment
<jimbaker> i believe there's a bug out there on this
<mars> m_3, looks like everything is now working on ec2.  Thanks for the help.
<jcastro> SpamapS: m_3: should we explicitly say "no provider specific features" up front or do you want that as part of the review process still?
<SpamapS> jcastro: Perhaps we need a "tips" section that suggests that these will be judged negatively. If somebody does something unbelievably cool that requires SQS or ELB.. they could still win.
<jcastro> OTOH, it could be a good talking point
<jcastro> "man this is a good charm, but it's s3 specific, how can we make it work on other things in a way that doesn't suck?" will be a problem for us anyway
<mars> Another question for the room: I am writing a new charm and forgot to make the hooks chmod +x.  I made the change on my local system, but juju deploy appears to be using a cached (and broken) copy of my charm.  Can I force juju to take the updated version?
<SpamapS> mars: heh.. you need to bump the revision file number
<SpamapS> mars: there's a feature in development now to add '-u' to deploy so that it automatically updates the revision number
<mars> SpamapS, ok, thanks.  I was hoping to avoid that when in 'dev mode' :)
<mars> That sounds like it would work
<SpamapS> mars: yeah, its very close.. probably will land in a few days
<mars> Or even just 'deploy --force-update'
<SpamapS> hazmat: ^^ another person who finds deploy maddening. :)
<mars> hehe
<SpamapS> I think it should tell you "Deploying existing charm." or "Uploading new charm"
<hazmat> SpamapS, the fix for that is in the queue
<hazmat> SpamapS, we're short a reviewer this week (ben's on vacation)
<hazmat> fwiw the signature is 'deploy --upgrade'
<xerxas> can I halt the controller machine between deployments ?
<xerxas> I'm deploying 3 machines , I don't want to run one more that will idle
<SpamapS> hazmat: can we have a follow-up that also adds an INFO message to tell whether or not a charm is being uploaded/pulled from the repo? That would be super helpful.
<SpamapS> xerxas: actually.. I think the answer is.. "maybe"
<xerxas> ;)
<SpamapS> xerxas: you might break the juju agents if you do that.. so when you resurrect it, you would need to check on them.. nobody has tested that use case.
<xerxas> works for 3 nodes, but not 2 ?
<xerxas> because of zookeeper ?
<xerxas> SpamapS:  ok interesting
<SpamapS> xerxas: the ability to stop/start the agents only landed in trunk yesterday. ;)
<xerxas> ;)
<hazmat> SpamapS, already done
<hazmat> in that branch
<SpamapS> hazmat: .... you are the wind beneath my wings...
 * hazmat has a vision of icarus
 * SpamapS hopes he lands on something soft when the wax melts
<jcastro> SpamapS: m_3: you guys busy? I need a G+ for like, 3 minutes.
<m_3> jcastro: sure man
<jcastro> SpamapS: around?
<jcastro> m_3: well he's bailed, let's hang out with hazmat
 * jcastro starts the hangout
<jcastro> m_3: hazmat: http://expertlabs.aaas.org/thinkup-launcher/
<m_3> https://bugs.launchpad.net/charms/+bugs?field.tag=new-charm
<SpamapS> jcastro: sorry I had a conflicting hangout
<jcastro> it's ok we changed a bunch of charm policy without you
<jcastro> j/k
#juju 2012-02-23
<_mup_> Bug #939164 was filed: add support to lxc provider backend in go port <juju:New> < https://launchpad.net/bugs/939164 >
<_mup_> juju/enhanced-relation-support r8 committed by jim.baker@canonical.com
<_mup_> Use cases and impl details
<xerxas> hi all
<xerxas> hazmat: I got juju mostly working on osx, thanks to the brew formula you sent me
<xerxas> I have a problem though
<xerxas> » juju deploy --repository . local:cf-redis
<xerxas> 2012-02-23 11:31:58,200 INFO Connecting to environment...
<xerxas> 2012-02-23 11:32:02,619 INFO Connected to environment.
<xerxas> [Errno 21] Is a directory
<xerxas> 2012-02-23 11:32:02,822 ERROR [Errno 21] Is a directory
<xerxas> what i did is : bzr branch lp:charms/cf-redis
<xerxas> am I supposed to tell juju to deploy a file ?
<benji> It is possible to stop an EC2 juju environment and then restart it later?  If so, is there a particular procedure for doing it correctly?
<benji> It looks like it might work, but the fact that restarted machines get new IPs is keeping the zookeeper machine from being able to connect to the others.
<benji> I wonder if it would work if I update the IPs in ZK.
<fwereade_> xerxas, a repository is a directory containing subdirs for individual ubuntu series; the charms for a particular series go in the appropriate subdir
<hazmat> xerxas, juju wants a particular repository structure.. ie.. try this.. mkdir  -p myrepo/oneiric && cd myrepo/oneiric && bzr branch lp:charms  && cd ..
<fwereade_> hazmat, that's ludicrous timing
<hazmat> whoops cd../..
<hazmat> fwereade_, indeed it is.. serendipity.. good morning/afternoon
<xerxas> hazmat:  I think that's pretty much what I have ...
<fwereade_> hazmat, good afternoon/morning :)
<xerxas> but I'll retry
<hazmat> xerxas, try it with juju -v deploy  as well.. it should give the full traceback on errors then
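The layout fwereade_ and hazmat describe, end to end; the charm name is taken from xerxas's paste:

```sh
# <repository>/<series>/<charm> layout, per fwereade_ and hazmat above
mkdir -p myrepo/oneiric
(cd myrepo/oneiric && bzr branch lp:charms/cf-redis)
juju -v deploy --repository myrepo local:cf-redis
```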
<benji> It is possible to stop an EC2 juju environment and then restart it later?  If so, is there a particular procedure for doing it correctly?
<hazmat> benji, its not supported..
<benji> darn
<hazmat> benji, with the restart support fwereade_ added, it's conceivable..
<hazmat> benji, but the ip address changes are a bit more problematic
<hazmat> hmm
<benji> I don't think it's too far off.  The main problem would appear to be that the machine's IP addresses change.
<benji> the machine IDs are consistent so refreshing the IPs seems plausible
<hazmat> benji, yeah.. we could tell the agents to go fetch zk servers directly from the s3 block
<hazmat> benji, right, but outside of the provisioning agent, none of the agents are supposed to have provider credential access
<hazmat> so machine id by itself isn't helpful..
<hazmat> we'd have to modify the provisioning agent to check/update those addresses in the s3 bucket at startup, which isn't so bad
<hazmat> benji, and the actual shutdown/resume would be done out of band
<jcastro> hazmat: #juju-dev is now logged
<benji> unfortunately, we have a charm that takes a *long* time to set up (1+ hours), and it has to run on an m1.large instance but we don't really want to have to run it 24-7 when we're only in development
<jcastro> have you looked at using LXC while developing?
<benji> we've been using LXC too, but the charm uses LXC itself, and since LXC isn't turtles-all-the-way-down yet, we couldn't keep using it
<jcastro> ah
<hazmat> jcastro, thank you
<gary_poster> m_3, hi.  Please consider this a gentle nag to complete our charm review. :-)  Thank you.
<jcastro> ok guys, I've got approval to launch this bad boy: https://juju.ubuntu.com/CharmContest
<jcastro> writing up the blog post now.
<jcastro> so if you have buddies that want to win some Amazon gift cards ...
<_mup_> Bug #939552 was filed: Juju should support MAAS as a provider <juju:New> < https://launchpad.net/bugs/939552 >
<TREllis_> Hi, I'm trying to run juju with lxc on precise. I've got to the stage where it's running the lxc-start command and starting the container, but the charm (any) remains pending
<TREllis_> after connecting with lxc-console I can see that the agent isn't running
<TREllis_> trying to start it manually, or looking in the /var/log/juju logs, I see that the command arguments passed to the python module are invalid... -> http://paste.ubuntu.com/854086/
<hazmat> TREllis_, that looks like a version disconnect.. newer version on host, older version on the container.. there were some recent changes to how agents get started
<hazmat> TREllis_, are you using the ppa on the host or precise packages?
<TREllis_> hazmat: ah-ha.
<hazmat> TREllis_, you can specify what the container/launched machine uses for a version with the juju-origin setting in environments.yaml..  a value of  'ppa' should do it
<TREllis_> hazmat: I'm using precise+ppa on host, 0.5+bzr464-1juju2~precise1 which is the ppa version afaik. I see the archive has a slightly older one in precise. That'll be the issue. I guess that can be fixed by juju-origin: ppa in environments.yaml?
<TREllis_> ah yeah that :)
<hazmat> it should be autodetecting the value for juju-origin
<hazmat> the correct value that is
<TREllis_> hazmat: ah ok, right, I had it set manually to distro. My bad
<TREllis> hazmat: thanks!
<cliff-hm> jcastro, is the blog post published? If so, url?
<jcastro> http://cloud.ubuntu.com/2012/02/juju-charm-contest-help-bring-free-software-into-the-cloud/
<jcastro> https://juju.ubuntu.com/CharmContest
<jcastro> complete rules there
<TREllis> hazmat: yep "state: started" that was it - thanks!
<cliff-hm> thank you :)
<m_3> gary_poster: thanks, will do
<SpamapS> hazmat: IMO, this is somewhat madness.. determining where to get juju from based on where juju was installed from. I think its time to start looking at having bootstrap freeze and distribute the version of juju on the client in file storage.
<gary_poster> thank you
<m_3> jcastro: whoohoo!
<jcastro> heya m_3, no pressure on that summit charm, hahah
<m_3> jcastro: unhuh... :)
<hazmat> SpamapS, i don't think its madness, but distributing juju via provider storage has a lot of merit as well for cross distro release environment upgrades
<jcastro> http://news.ycombinator.com/item?id=3625153
<jcastro> shooting for my first submission to HN!
<SpamapS> hazmat: well at least if we do use apt to try and determine where juju was installed from, we should pin the version
<hazmat> SpamapS, and if that version isn't avail anymore?
<SpamapS> hazmat: you get a clear failure
<SpamapS> instead of this "oops something weird went wrong"
<SpamapS> ERROR: your client version is not available in your specified juju-origin (ppa)
<hazmat> SpamapS, you don't though.. you just get a started machine without a running agent.. this applies to the provisioning agent launching machines for new service units as well
<SpamapS> hazmat: no I want the client to ERROR
<hazmat> i guess that doesn't apply if its already in storage
<marcoceppi> Upvoted!
<hazmat> but for apt cases it does
<SpamapS> hazmat: right... so much simpler if we make juju responsible for deploying itself
<SpamapS> hazmat: I acknowledge that this is *not* a tiny or super simple change... but I think its going to be necessary.
<hazmat> SpamapS, that's not making 12.04..
<SpamapS> oh snap... I just saw an ad for season 2 of game of thrones and got goose bumps
<SpamapS> <--- nerd
<SpamapS> hazmat: 12.04 is *supposed* to be done.
<SpamapS> hazmat: feature freeze was a week ago. The release team can tell us to go to hell at this point. ;)
<charmbot> Project oneiric-ec2-charm-bitlbee build #1: FAILURE in 1.3 sec: http://ec2-67-202-62-13.compute-1.amazonaws.com:8080/job/oneiric-ec2-charm-bitlbee/1/
<charmbot> Project oneiric-local-charm-bitlbee build #1: FAILURE in 1.8 sec: http://ec2-67-202-62-13.compute-1.amazonaws.com:8080/job/oneiric-local-charm-bitlbee/1/
<jcastro> robbiew: I love how no one noticed that mmims shaved the mountain man face.
<robbiew> lol...eh...I've seen him both ways....now if he shaved HIS HEAD, I'd notice
<jamespage> m_3: love the charmbot!
<m_3> jamespage: ha!  I was just looking at the time to see if I could catch you
<jamespage> m_3, not for long...
<m_3> I'm getting build-publisher login barfs... let's go PM
<m_3> BTW, there'll be some noise in the channel for a bit while testing tests
<charmbot> Project oneiric-ec2-charm-bitlbee build #2: STILL FAILING in 0.88 sec: http://ec2-67-202-62-13.compute-1.amazonaws.com:8080/job/oneiric-ec2-charm-bitlbee/2/
<jiltdil> Please tell me, what is the charm contest?
<m_3> jiltdil: https://juju.ubuntu.com/CharmContest
<jiltdil> m_3,  Thanks
<m_3> sure thing
<jiltdil> ok
<charmbot> Project oneiric-ec2-charm-bitlbee build #4: STILL FAILING in 0.36 sec: https://jenkins.qa.ubuntu.com/job/oneiric-ec2-charm-bitlbee/4/
<SpamapS> charmbot: YOU ARE SO AWESOME!
<charmbot> SpamapS you may not issue bot commands in this chat!
<SpamapS> tho.. a bit cold
<m_3> SpamapS: :)
<m_3> SpamapS: we can enable kick-offs through the bot too
<SpamapS> "...we are being robbed of an art form as ancient as computers themselves."
<SpamapS> http://tatiyants.com/devops-is-ruining-my-craft/
<SpamapS> a must read for anyone
<m_3> nice
<koolhead17> gosh we have a bot here now in channel?
<m_3> koolhead17: yup, another one... mup's for bugs, charmbot's for tests
 * koolhead17 says hi to charmbot
<koolhead17> m_3: cool!!
<m_3> the links are still not live, but they should be tomorrow europe-time
<m_3> hazmat: frickin love snapshot restore!!
<hazmat> m_3, nice!
<hazmat> m_3, i was figuring we could make some minor additions to the loader to have it work against status output.. that would also be a nice tool
<hazmat> m_3, the only major limitation i can think of on the snapshot restore, is if you start having relations between services created after the snapshot and those before, then it breaks isolation
<m_3> hazmat: and sort of an impl of stacks :)
<m_3> right
<hazmat> m_3, i don't know what those are ;-)
<m_3> gotta snapshot early in the process
<hazmat> m_3, yeah.. it probably makes sense to have a reset option just to go back to pristine..
<m_3> but that really does smell like a set of staged commits... that can be rolled back even
<hazmat> which is actually the more common usage for testing
<m_3> hazmat: true
<m_3> you want MPs or you wanna make it owned by a group instead of just you?
<m_3> (path issues)
<m_3> loaders trying to read plans from a different place than the planner put them
<m_3> I can split it out into separate commands... but...
<hazmat> jamespage, those charms are looking pretty good
 * hazmat pokes out at the hbase and zk charms
<hazmat> jamespage, just curious if you were going to split out the hbase into hdfs nodes as well (which is already extant i thought).. or does it make more sense to just run those inline for locality
<hazmat> m_3, mps
<hazmat> m_3, or a group is fine..
<hazmat> m_3, any suggestions on group?
<hazmat> maybe juju-jitsu
<jamespage> hazmat: I was pondering the same thing myself (splitting the hdfs nodes)
<jamespage> that way they could also be shared with mapreduce.
<jamespage> hazmat: rolling restarts of the hbase-slave nodes are not great ATM - I'm working around the issue with a staggered sleep to stop them losing the ROOT and META regions.
<hazmat> jamespage, hmm.. yeah.. we need a good recipe for doing that with peer relations to coordinate a rolling op
<m_3> hazmat: average cluster size is 140 datanodes... goes up to thousands... peer relations would just explode
<m_3> hazmat: juju-jitsu's fine for charmrunner
<jamespage> m_3: I think thats where its important to break a cluster up into multiple service instances
<jamespage> datacluster1, datacluster2 etc...
<m_3> right
<jamespage> they can all still talk to one master service
<m_3> exactly... that should scale nicely
<m_3> even to thousands of nodes
<jamespage> but it allows you to manage them as discrete regions in a deployment
<m_3> and roll
<m_3> not sure what the price would be as far as data integrity...
<m_3> first thought is to shard across these datasets,
<m_3> but then that wouldn't really support a rolling upgrade
<m_3> we need mirroring too
<hazmat> m_3, explode is a strong word.. they'd work fine, but they'd be chatty, but if you're talking about lock-step distributed coordination.. it is what it is.
<m_3> hazmat: :)
<hazmat> m_3, it might be a little nicer with an explicit leader election, which would be a lot less chatty.. but even then we'd need to communicate that and allow for the next leader, which would again be via peer rel data
<m_3> hazmat: would love to see that realized with a single leader
<m_3> or ideally cascading groups of local leaders
<m_3> but it gets pretty big pretty quick if pure peers have to talk to everybody else
<m_3> hazmat: the good news is that there's a nice global sequence number that's ordered and easily divided
 * m_3 flashbacks to PE classes... duck duck goose
<hazmat> m_3, that works for a more static leader based on the presence/availability of a node, but for a transient leader, effectively a lock around a rolling operation, it feels like it wants a different primitive.
<charmbot> Yippie, build fixed!
<charmbot> Project oneiric-local-charm-bitlbee build #2: FIXED in 42 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-bitlbee/2/
<m_3> whoohoo!
<charmbot> Project oneiric-local-charm-byobu-classroom build #1: FAILURE in 6.5 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-byobu-classroom/1/
<charmbot> Project oneiric-local-charm-cassandra build #1: FAILURE in 6.3 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cassandra/1/
<charmbot> Project oneiric-local-charm-ceph build #1: FAILURE in 6.3 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-ceph/1/
<charmbot> Project oneiric-local-charm-cf-mongodb build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cf-mongodb/1/
<charmbot> Project oneiric-local-charm-cf-mysql build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cf-mysql/1/
<charmbot> Project oneiric-local-charm-cf-redis build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cf-redis/1/
<charmbot> Project oneiric-local-charm-cloudfoundry-server build #1: FAILURE in 6.7 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cloudfoundry-server/1/
<charmbot> Project oneiric-local-charm-cloudfoundry-server-dea build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cloudfoundry-server-dea/1/
<charmbot> Project oneiric-local-charm-couchbase build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-couchbase/1/
<charmbot> Project oneiric-local-charm-couchdb build #1: FAILURE in 6.1 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-couchdb/1/
<charmbot> Project oneiric-local-charm-etherpad-lite build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-etherpad-lite/1/
<charmbot> Project oneiric-local-charm-ganglia build #1: FAILURE in 6.5 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-ganglia/1/
<charmbot> Project oneiric-local-charm-hadoop-mapreduce build #1: FAILURE in 6.5 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-hadoop-mapreduce/1/
<charmbot> Project oneiric-local-charm-hadoop-master build #1: FAILURE in 6.7 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-hadoop-master/1/
<charmbot> Project oneiric-local-charm-hadoop-slave build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-hadoop-slave/1/
<charmbot> Project oneiric-local-charm-haproxy build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-haproxy/1/
<charmbot> Project oneiric-local-charm-jenkins build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-jenkins/1/
<charmbot> Project oneiric-local-charm-jenkins-slave build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-jenkins-slave/1/
<charmbot> Project oneiric-local-charm-limesurvey build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-limesurvey/1/
<charmbot> Project oneiric-local-charm-mediawiki build #1: FAILURE in 6.7 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-mediawiki/1/
<charmbot> Project oneiric-local-charm-membase build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-membase/1/
<charmbot> Project oneiric-local-charm-memcached build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-memcached/1/
<m_3> hmmm... these'll clear through in a sec... sorry gang
<charmbot> Project oneiric-local-charm-minecraft build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-minecraft/1/
<charmbot> Project oneiric-local-charm-mongodb build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-mongodb/1/
<charmbot> Project oneiric-local-charm-munin build #1: FAILURE in 6.3 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-munin/1/
<m_3> I'll move them out to ##charmbot-test while debugging
<jimbaker>  bcsaller, can you take a look at https://code.launchpad.net/~hazmat/juju/env-from-env/+merge/93321 ?
<jimbaker> it's really trivial, and will help m_3 help out on his charm testing
<bcsaller> looking
<bcsaller> approved
<jimbaker> bcsaller, thanks!
<hazmat> m_3, yeah.. not sure that bot should ever be round these parts
<hazmat> its rather distracting ;-)
<m_3> hazmat: yup... any suggestions for where it should live?
<m_3> we need ways to notify charm authors when stuff is breaking
<hazmat> m_3, i would think email for the charm author, and either a secondary mailing list or irc channel
<hazmat> m_3, it may not always be valid or available, but the bzr commits do contain some identity information, its somewhat freeform, but 98% of the time it appears to be a valid email, minus some of yours ;-)
<hazmat> m_3, better would be an explicit email/maintainer field for a charm
<SpamapS> I actually do think the bot should be here... but only notify us of changes in state after the tests are stable.
<SpamapS> hazmat: I like having it here. This is really where users and charmers go to discuss charms. Raises awareness.
<hazmat> SpamapS, stable means not spamming the channel?
<SpamapS> hazmat: yeah I think that is more a product of the tests still being very new.
<SpamapS> once they're passing reliably over and over and over, having them here is a great way to raise awareness of their existence.
<m_3> gotcha... well I'll leave them in ##charmbot-test until stable... then move them back
<jcastro> SpamapS: we need to talk slides for our webinar tomorrow
<jcastro> let me rephrase, we should talk tomorrow about our slides, before the webinar rehearsal on tuesday
<SpamapS> jcastro: thank you, heart attach averted
<SpamapS> attack even
<SpamapS> hmm
<koolhead17> jcastro: around
<jcastro> yeah for a few minutes
<koolhead17> jcastro: http://atuljha.com/blog/2012/02/24/newbies-guide-to-joining-cloud-ecosystem/
<koolhead17> what do u think of this :P
<SpamapS> all: I'd appreciate a final review/+1 for the tests spec here: https://codereview.appspot.com/5624044/diff/7001/source/charm-tests.rst
<SpamapS> Hmm
<SpamapS> hazmat: was looking at your 'env-from-env' branch, and thinking, what about JUJU_REPOSITORY to allow setting the repository in the env too? That would basically eliminate the need for a wrapper to run the tests (assuming we can also override the default charm namespace in a similar fashion)
<hazmat> SpamapS yeah... that seems pretty reasonable
<hazmat> SpamapS, i think we had discussed with niemeyer_ possibly sticking that in environments.yaml
<hazmat> SpamapS, i don't know that it would remove the need for the wrapper
<hazmat> SpamapS, since the wrapper is still obviating the need for specifying the explicit namespace on the cli.. ie local:charm
<SpamapS> hazmat: I also want to add a JUJU_CHARM_NAMESPACE too :)
<hazmat> SpamapS, fail ;-)
<SpamapS> hazmat: that way the tests are really just running with a cultivated environment, rather than some special wrapper
<hazmat> SpamapS, yeah.. that one is a little more odd, i'd really have to defer to the tastemaster
<SpamapS> lol
<hazmat> SpamapS, but wouldn't you have the wrapper even with that for the extra commands
<SpamapS> hazmat: removed the extra commands for now
<hazmat> which i still think should be a different command to avoid confusion.
<hazmat> SpamapS, cool
<hazmat> SpamapS, what about using environment fixtures for loading
<hazmat> i guess its equiv to the cli.. but it would give a more defined data structure for us to manipulate the cultured environment
<SpamapS> I think both should be possible, but doing it in a static file is not helpful to automated testing... just increases the crap we have to cleanup/edit on tests.
<hazmat> SpamapS, yeah.. its a little too manicured for ad hoc test scenario construction
<SpamapS> hazmat: eventually repository definitely should be in ~/.juju somewhere .. so people can just share that dir or environments.yaml and get the same result
<hazmat> SpamapS, that seems a bit like arbitrary structure
<SpamapS> hazmat: ESHORTHAND .. what?
#juju 2012-02-24
<_mup_> Bug #939932 was filed: Should be able to set --repository from system environment <juju:New> < https://launchpad.net/bugs/939932 >
<hazmat> SpamapS, why does my repo need to be in ~/.juju
<SpamapS> hazmat: I mean your pointer to repository should be able to be specified from a config file somewhere in ~/.juju
<hazmat> SpamapS, ah.. yeah.. that sounds good
<SpamapS> hazmat: I'm not entirely sure environments.yaml is the right place
<SpamapS> hazmat: one thought I had.. environments should actually be usable as repositories
<SpamapS> long term anyway :)
 * hazmat notes its beer o clock around here
<SpamapS> hazmat: yageshegedra
<_mup_> Bug #939944 was filed: Default Charm namespace should be overridable by environment variables. <juju:In Progress by clint-fewbar> < https://launchpad.net/bugs/939944 >
 * SpamapS is getting the hang of lbox.. ;)
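A sketch of what a test script could look like once the two bugs above land: the environment, repository, and default namespace all come from the process environment rather than a wrapper. Both JUJU_REPOSITORY and JUJU_CHARM_NAMESPACE are only proposals at this point in the log, so the variable names are assumptions:

    #!/usr/bin/env python
    # Hypothetical test driver using the proposed environment variables
    # (JUJU_REPOSITORY from bug #939932, JUJU_CHARM_NAMESPACE from bug
    # #939944) instead of a wrapper that rewrites every command line.
    import os
    import subprocess

    env = dict(os.environ)
    env['JUJU_ENV'] = 'testing'              # from the env-from-env branch
    env['JUJU_REPOSITORY'] = '/srv/charms'   # proposed; bug #939932
    env['JUJU_CHARM_NAMESPACE'] = 'local'    # proposed; bug #939944

    # With the namespace defaulted, a plain charm name resolves against the
    # local repository, so tests invoke juju exactly as a user would.
    subprocess.check_call(['juju', 'deploy', 'mysql'], env=env)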
<locke105> is there a way to use juju without an amazon S3-like store? looking for a way to deploy to an openstack setup but I don't want to set up a Swift module if i dont have to
<_mup_> juju/enhanced-relation-support r9 committed by jim.baker@canonical.com
<_mup_> Cleanup
<ejat> can someone review lp:~fenris/charms/oneiric/symfony/trunk
<ejat> im having a problem .. when i run the install script manually on ec2 it works ..
<ejat> but when i use juju deploy ...
<ejat> it can't be viewed / tested on ec2
<ejat> !ping SpamapS & hazmat
<ejat> !ping jcastro
<m_3> ejat: hey, so take a look at the logs on the service unit
<m_3> ejat: they're found in /var/lib/juju/units/<unit>/charm.log
<m_3> that should help debug this a bit
<ejat> m_3: ok thanks ..
<m_3> ejat: I'm spinning it up now, but I created a new Bug #940140
<_mup_> Bug #940140: Charm needed: Symfony <Juju Charms Collection:New> < https://launchpad.net/bugs/940140 >
<m_3> please grab it and attach your branch... it pops up on our review queue if you also add the tag: 'new-charm'
<ejat> ok thanks m_3
<ejat> ill try to update the bugs accordingly
<m_3> np, it's just a good place to keep track of conversation about it... other than irc logs :)
<m_3> ejat: it just came up to a 'started' state for me
<m_3> ejat: what problems are you seeing?
<ejat> yeah .. it shows started .. but when i open the public dns .. it shows nothing ..
<ejat> unlike when i manually deploy using the script ..
<m_3> are you "exposing" the service after deploying?  i.e., 'juju expose symfony'
<m_3> (it's firewall stuff)
<ejat> m_3: yeah .. did it also ...
<ejat> # Make it publicly visible, once the symfony service is exposed
<ejat> open-port 80/tcp
<ejat> inside the install
<m_3> there's an additional command that runs from your client
<m_3> juju bootstrap
<m_3> juju deploy ...
<m_3> juju expose symfony
<m_3> it's designed to be a controlled/gated network exposure for a whole stack of services
<ejat> yeah .. only done that 3 command ..
<m_3> ejat: hmmm... if juju status shows that the service unit is 'started' and lists 'open-ports:', then I'm not sure what's going on
<m_3> I'm getting a 'It works!' from apache and a directory listing when I hit http://<ec2-addr>/symfony
<ejat> http://<ec2-addr>/symfony/web
<m_3> right, that's showing the pretty 'project created' page:  http://ec2-107-22-14-139.compute-1.amazonaws.com/symfony/web/
<m_3> maybe it's been upgraded and is a bit stale?  you might try recycling your environment if nothing else is running
<ejat> your instance on precise ?
<ejat> or oneiric ?
<ejat> m_3: if u see that .. mean my charm work :)
<ejat> or i should change my     default-series: oneiric
<ejat> in my juju environments
<m_3> ah, sorry... I was assuming you were using oneiric instances because your branch had charms/oneiric/symfony
<m_3> ejat: yes, it looks like the install part of the charm works fine
<m_3> I'd recommend changing to 'default-series: oneiric' in your ~/.juju/environments.yaml file
<ejat> i already put that in my environment
<m_3> cool
<ejat> can u try http://ec2-46-137-225-47.ap-southeast-1.compute.amazonaws.com/symfony/web
 * ejat confused .. y u getting the pretty page .. but i am not :(
<ejat> m_3: can u view it ?
<m_3> ejat: I get a php file
<m_3> but it's not viewable
<ejat> yeah .. thats what i get too ..
<m_3> this is on an oneiric instance?
 * ejat wondering y u can get it :)
<ejat> yups .
<ejat> just now u launch it on what instance?
<m_3> try to 'juju ssh symfony/0'
<m_3> and then 'sudo apache2 restart'
<m_3> see if that fixes it
<m_3> I've only run it on oneiric
<ejat> 0 or 1 ?
<m_3> sorry, that should be 'sudo service apache2 restart'
<m_3> whichever unit you've got started
<ejat> yeah its works
<ejat> after restart
<ejat> can u retry
<m_3> I bet you need an extra restart after you make the symlink for sf
<ejat> owh okie ..
<m_3> yes, I see the pretty 'project created' page
<ejat> \0/
<m_3> cool
<ejat> but need someone to review the code .. make it more clean ..
<ejat> :)
<m_3> sure... we can put it through the usual review process.  When you're ready just add the 'new-charm' tag to that bug
<m_3> and I or somebody in the group will review the branch
<m_3> not right now though :)... back to sleep for me
<ejat> thanks so much m_3
<m_3> glad to help
<ejat> see ya soon .. or maybe in uds-q .. :)
 * ejat also feel \\0/ can contribute some charm as stated within the uds-p milestone .. 
<ejat> trying to submit other charm if possible .. gtg too ...
<jamespage> hazmat, m_3: I'd really like to refactor the hadoop charm into mapreduce and hdfs charms - any objections?  I'll leave the current hadoop-master/hadoop-slave as is
<hazmat>  james_w sounds good to me
<hazmat> er..
<hazmat> jamespage, +1
<jamespage> lol
 * hazmat checks out ironfan
<jamespage> hazmat, it would mean that the hbase-master charm could be multi-master much more easily
<SpamapS> jamespage: that sounds great! :)
 * jamespage is running mapreduce jobs on his pandaboard this morning :-)
<hazmat> jamespage, nice!  i've been meaning to touch base someone about juju on arm
<hazmat> w/
<hazmat> jamespage, which jvm?
<jamespage> hazmat: the only one for armhf (openjdk)
<jamespage> now that the thumb2 port has landed as default in openjdk-6 its actually OK
<jamespage> and passes the TCK
<m_3> jamespage: splitting hadoop makes sense to me
<m_3> jamespage: I also like Clint's approach (ceph and mysql) to a single charm that can be deployed into different roles
<m_3> jamespage: either way, I'm not particularly fond of the names "-master" and "-slave"
<m_3> professionally, that is :)
<jamespage> m_3: remind me how that works again?
<m_3> sure... so consider the simple one... mysql
<m_3> juju deploy ... local:mysql masterdb
<m_3> juju deploy ... local:mysql slavedb
<m_3> juju add-relation masterdb:master slavedb:slave
<m_3> probably better named something like 'replicaset1' or something... but I'm cutting/pasting
<jamespage> ah - I see
<jamespage> nice
 * jamespage goes to refactor his charms
<m_3> yeah, it's really elegant
<m_3> but shhhhhh... we don't want it to go to his head :)
<m_3> jcastro: review queue length back down to 1 (I'll catch saltstack next week)
<jcastro> \o/
<jcastro> m_3: was rick's thing hard or easy?
<m_3> jcastro: pretty straightforward... just some basic updates to the rails charm
<jcastro> k
<jcastro> have you had a chance to look at summit yet?
<m_3> I'll put the rails one up for review soon... wanted to do some cleanup before showing it to rails peeps at conferences
<m_3> summit's next on the list today
 * jcastro nods
<m_3> the good news is we actually have a django charm
<jcastro> hey alright!
<m_3> like the rails/node charms, this one will deploy a django app straight from a bzr repo
<jcastro> that's basically awesome
<m_3> so hopefully it'll go quick
<jcastro> it gives you a good story for those language conferences
<m_3> haven't really checked the state of the charm though
<m_3> might need some updating / completion... it was never put up for inclusion into the store for some reason
<m_3> written by... Michael Nelson
<m_3> looks like great work... just don't know if he finished or not
<m_3> I'll find out today/tomorrow
<gary_poster> m_3, thanks for review!  We'll get to the fixes
<m_3> gary_poster: thanks for the charm!  please do branch lp:charm-tools too when y'all get a chance... y'all have great stuff for a charm-helpers-py!
 * m_3 too many "y'all"s in one place... I guess I'm feeling very Texan atm
<m_3> my fav is still "y'all're".... which is actually more characters than "you are"... which defeats the purpose of a contraction... harumph
<SpamapS> m_3: but it rolls off the tongue so nicely... like y'all're just a rollin in thu mud on a sundee
<m_3> :)
<jamespage> m_3: that works really nicely
<jamespage> thanks for the tip!
<jamespage> m_3, http://bazaar.launchpad.net/~james-page/charms/precise/hadoop-hdfs/trunk/view/head:/README
<m_3> jamespage: awesome... that looks great
<jamespage> m_3: certainly reduces the amount of code replication across the two-charm approach (most of it is the same)
<m_3> we can add additional namenodes too... not sure what to call that relation tho :)
 * SpamapS is all for charm consolidation whenever possible
<m_3> yeah, buildbot ended up using cross-charm links to reduce code-repl
<SpamapS> I've been thinking about a streamlined approach to configuring the various roles.. I wonder if we could add a --role=slave and that would pull $charmroot/roles/slave.yaml as the default values to override the usual defaults..
<m_3> consolidation would work well for them
<SpamapS> This happens with ceph really badly.. you don't know if its a single node ceph tester, or a 100 node ceph buildout, until the 2nd unit is added...
<m_3> SpamapS: including sample deploy scripts into the readme works well for that imo
<SpamapS> m_3: yeah, thats the way right now
<SpamapS> I'm getting way too far ahead of myself with the role thing. :)
<m_3> they had a $CHARM_DIR/samples/ directory with various config scenarios
<m_3> hong-kong-phooey
 * m_3 makes bruce lee fighting sounds
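SpamapS's --role idea stays a thought experiment in this log, but a hook-side sketch of it might look like the following: load $CHARM_DIR/roles/<role>.yaml if it exists and let it override the charm's built-in defaults. The roles/ layout, the "role" option, and the sample settings are all hypothetical:

    #!/usr/bin/env python
    # Hypothetical sketch of the proposed role mechanism: a "role" config
    # option selects a YAML file of default overrides shipped in the charm.
    # Neither the roles/ directory nor the option is a real juju feature.
    import os
    import subprocess
    import yaml

    def config_get(key):
        return subprocess.check_output(['config-get', key]).decode().strip()

    settings = {'cluster-size': 1, 'journal': 'off'}   # made-up defaults

    role = config_get('role')                          # e.g. "slave"
    role_file = os.path.join(os.environ['CHARM_DIR'], 'roles', '%s.yaml' % role)
    if role and os.path.exists(role_file):
        with open(role_file) as f:
            # Role defaults override the generic ones; explicit deploy-time
            # settings would still win over both in a fuller implementation.
            settings.update(yaml.safe_load(f) or {})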
<m_3> jamespage: a namenode-cluster gets complicated with the relations though... I'd have to do that one on the whiteboard :)
<jamespage> m_3: yeah - in 1.0.0 it is a single point of failure TBH
<m_3> w/ a lot more coffee
<jamespage> its possible with DRBD and stuff
<m_3> I thought they added that in 205
<jamespage> m_3: nah
<m_3> or was that for the 0.23 line
<SpamapS> I'm more and more convinced that some charms need a config item that is "wait for relationX" so that you can just skip configuration until relationX is established, so that you don't get into the situation we see with nova where it stores in sqlite, then mysql
<jamespage> yep
<m_3> ah, gotcha... my bad
<jamespage> SpamapS, +1 to that
<SpamapS> I need to return to the ceph charms so I can demonstrate that..
<SpamapS> actually using that approach would probably allow easy mysql ring replication for the mysql charms too
<m_3> SpamapS: or some way to know that the relation is actually completed before then doing a 'juju set --config ' from an outside script
<SpamapS> m_3: hm, I think thats another approach, but one that will be more complicated and not yield better results
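A minimal sketch of the "wait for relationX" gate SpamapS is describing: the config-changed hook exits early until the required relation exists, so nothing ever gets configured against sqlite first. The shared-db relation name is an assumption:

    #!/usr/bin/env python
    # Hypothetical config-changed guard: skip all configuration until the
    # database relation exists, avoiding the sqlite-then-mysql
    # double-configuration described above. "shared-db" is an assumption.
    import subprocess
    import sys

    # relation-ids prints one id per established instance of the relation
    rids = subprocess.check_output(['relation-ids', 'shared-db']).decode().split()
    if not rids:
        subprocess.check_call(['juju-log', 'waiting for shared-db relation'])
        sys.exit(0)   # exit cleanly; a later hook run will pick up from here

    # ... real configuration would continue below ...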
<jamespage> other than using a full config management solution like puppet, does anyone have a nice way of determining whether a restart of a service is required?
<jamespage> I thought about checksumming the config files and only restarting if stuff had changed.
<m_3> jamespage: hmmm... no clue
<m_3> that might be useful as an upstart primitive though.... service xxx status
<SpamapS> jamespage: I do that in mysql
<jamespage> SpamapS, I'll take a look - just trying to avoid blind restarts
<jamespage> 'something might have changed'
<SpamapS>         if oldhash != tmd5.digest():
<SpamapS>             os.rename('/etc/mysql/my.cnf','/etc/mysql/my.cnf.%s' % tmd5.hexdigest())
<SpamapS>             os.rename(t.name, '/etc/mysql/my.cnf')
<SpamapS> jamespage: with mysql it's pretty important to only restart when critical configs are changed. ;)
<jamespage> ack
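Filling out the fragment above into something self-contained: render the new config to a temp path, compare checksums, and only move it into place and restart when it differs. Paths and the service name are placeholders, not lifted from the mysql charm:

    #!/usr/bin/env python
    # Sketch of the compare-before-restart pattern from the mysql charm;
    # file paths and the service name are illustrative placeholders.
    import hashlib
    import os
    import shutil
    import subprocess

    TARGET = '/etc/myapp/myapp.conf'

    def file_md5(path):
        if not os.path.exists(path):
            return None
        with open(path, 'rb') as f:
            return hashlib.md5(f.read()).hexdigest()

    def install_config(rendered):
        """Install a freshly rendered config, restarting only on change."""
        old = file_md5(TARGET)
        if file_md5(rendered) == old:
            os.unlink(rendered)
            return                                  # unchanged: no restart
        if old is not None:
            shutil.copy(TARGET, '%s.%s' % (TARGET, old))   # keep a backup
        shutil.move(rendered, TARGET)
        subprocess.check_call(['service', 'myapp', 'restart'])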
<jamespage> now to refactor the hbase-* charms into a single charm
<_mup_> Bug #940492 was filed: Charm metadata must support subordinates and container relations <juju:New> < https://launchpad.net/bugs/940492 >
<_mup_> Bug #940498 was filed: Juju must provide the implict juju-info relation <juju:New> < https://launchpad.net/bugs/940498 >
<_mup_> juju/enhanced-relation-support r10 committed by jim.baker@canonical.com
<_mup_> More work on spec
#juju 2012-02-25
<george_e> I've got a rather strange issue here - it seems like Juju is completely ignoring the default values I have set in config.yaml.
<george_e> If I interactively debug the config-changed hook and try to retrieve a config value, it doesn't return the default - it's empty.
<george_e> Any ideas?
<SpamapS> george_e: what version of juju?
<SpamapS> george_e: a few bugs around that have been fixed in the last 2 months.
<george_e> SpamapS: Apparently 0.5+bzr467.
<SpamapS> george_e: ok well then that is odd.
<SpamapS> george_e: unfortunately you've caught me just before passing out from exhaustion.. but please do come back and ask again. :)
<george_e> I've just upgraded to the latest version and I've started the environment from scratch.
<george_e> SpamapS: Yeah, I'll be here tomorrow.
<SpamapS> george_e: also it might work to ask on askubuntu.com and tag with 'juju' .. we read those periodically and they tend to produce nice long-lived answers.
 * SpamapS passes out
<george_e> SpamapS: I can definitely do that.
<george_e> I use Ask Ubuntu all the time :P
<shafiqissani> If I want to deploy a single charm ... why is the bootstrapped node necessary ? the extra node is a huge waste of resources... Is there any possibility to deploy a charm without it ?
<george_e> Juju seems to be broken - all of the charms I deploy get stuck at the 'pending' stage.
 * george_e downgrades to 0.5+bzr467.
<george_e> It's still not working :(
<george_e> Time for a serious downgrade to the version packaged in the Oneiric archives.
<george_e> ...followed by a prompt reboot.
#juju 2012-02-26
<george_e> ...ah - it works now.
<george_e> So I guess the lesson becomes - don't use the dev. releases of Juju if you're looking for stability.
#juju 2014-02-17
<cargill> if a charm has a database relation, how should it handle being in relation to multiple databases at once? should it reject the second join? should it ignore it?
<hazmat> cargill, well as a client to a database, each of the db conns would have different relation names
<hazmat> cargill, ie.. mediawiki does this.. it can have multiple mysql relations.. one for read slave and one for db.. it distinguishes the usage based on the relation name (which also means different rel hooks)
<hazmat> cargill, client/require deps are only satisfied once per service.
<cargill> hazmat: but if you have a database relation, the user can still join it with multiple charms, right?
<cargill> (database charms)
<hazmat> cargill, right.. the server/provider side can have many instances of the relation
<hazmat> cargill, in terms of distinguishing between those, you can use relation-ids to list the different instances of that named relation on the server
<hazmat> cargill, its not clear what your question/use case is.. could you elaborate?
<cargill> you say provider can have multiple instances of a relation, but the other side cannot?
<cargill> designing a db-relation-joined/departed, I wonder if I have to handle a user setting up a relation to multiple database charms (where the application can only connect to a single database)
<hazmat> cargill, well... technically it can, its just not common (and certain tools like the gui don't support it)
<hazmat> cargill, you mean like they can connect to postgres or mysql?
<hazmat> cargill, maybe this example clarifies http://pastebin.ubuntu.com/6949028/
<hazmat> actually that simplifies it too much.. here's a better example http://pastebin.ubuntu.com/6949032/
<cargill> so again, the question is, if someone tries to do that (add a second db relation, where one is already active), what's the right response?
<cargill> (from the *joined/departed hooks)
<hazmat> cargill, i'd error so it draws attention from admin
<cargill> thanks
<hazmat> cargill, and log an appropriate error msg
<cargill> sure :)
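What hazmat's advice could look like inside a db-relation-joined hook, sketched in Python: enumerate the instances of the relation with relation-ids and error out when a second database is related. The relation name "database" is an assumption from cargill's description:

    #!/usr/bin/env python
    # Hypothetical db-relation-joined hook for a charm that can only use
    # one database: count relation instances and fail loudly on a second,
    # so the admin notices. The relation name "database" is an assumption.
    import subprocess
    import sys

    rids = subprocess.check_output(['relation-ids', 'database']).decode().split()
    if len(rids) > 1:
        subprocess.check_call(
            ['juju-log', '-l', 'ERROR',
             'multiple database relations (%s); only one is supported'
             % ', '.join(rids)])
        sys.exit(1)   # hook error puts the unit in an error state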
<tomixxx3> hi, is it bad if it says "instance-state: missing" after deploying a charm?
<tomixxx3> agent-state is "started" :-)
<marcoceppi> tomixxx3: is this on local provider?
<tomixxx3> hi marcoceppi
<tomixxx3> what do u mean with "local provider" ?
<marcoceppi> tomixxx3: the instance-state: missing, what provider are you using? Local, amazon, hp cloud, etc
<tomixxx3> openstack
<marcoceppi> tomixxx3: interesting, does it still say missing?
<tomixxx3> yep, btw the nodes have internet access now :-)
<tomixxx3> i had to set "router ip" in maas dashboard to the same ip of the MaaS-Server
<tomixxx3> figured this out with jtv in #maas
<marcoceppi> tomixxx3: ah, good to know
<tomixxx3> marcoceppi: right now, i have deployed a bunch of charms and i'm waiting until they all have "started"
<marcoceppi> tomixxx3: well that means it simply can't figure out if the instance is running or not. missing could mean the instance is gone or it can't get a status
<tomixxx3> marcoceppi: oh no, sounds not good
<tomixxx3> but let's see
<marcoceppi> tomixxx3: could you show me your juju status?
<marcoceppi> tomixxx3: also, in the horizon dashboard do you see instances launched?
<tomixxx3> i mean, i have deployed multiple charms on a single node, because i have not that much nodes
<tomixxx3> with lxc-create if u remember
<marcoceppi> so, are you using openstack or maas?
<tomixxx3> both ? ^^
<tomixxx3> https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<marcoceppi> tomixxx3: can you pastebin your juju status please
<tomixxx3> one sec
<tomixxx3> http://pastebin.ubuntu.com/6949593
<tomixxx3> as u can see, cloud2.master is still booting
<tomixxx3> (ok i can see the node is booting ^^)
<marcoceppi> tomixxx3: Okay, so this is on the maas environment
<tomixxx3> however, nova-volume failed
<marcoceppi> instance-state missing is probably a known issue with lxc containers, the agent-state is started and that's all that matters
<tomixxx3> yep
<marcoceppi> cloud2.master probably needs to be power cycled depending on how long ago you commissioned it
<marcoceppi> nova-volume is in error, so try running juju resolved --retry nova-volume/0 see if that helps
<tomixxx3> cloud2.master is installing ubuntu right now
<tomixxx3> i have it in front of me
<marcoceppi> tomixxx3: gotchya
<marcoceppi> tomixxx3: also, could you pastebin the log from nova-volume/0
<tomixxx3> kk
<marcoceppi> it'll be in /var/log/juju/unit-nova-volume-0.log
<marcoceppi> on nova-volume/0
<tomixxx3> i have to login on nova-volume/0 for this i guess?
<marcoceppi> tomixxx3: if you recall, co-locating most all services to LXC /might/ work but isn't recommended. You might need to do some re-jiggering to get it to work
<marcoceppi> tomixxx3: yes, run juju ssh nova-volume/0
<tomixxx3> "re-jiggering" ?
<marcoceppi> tomixxx3: you might have to massage the node a little bit to get it to setup
<tomixxx3> at home, i have two physical nodes lying around, maybe i attach them to the cloud
<marcoceppi> tomixxx3: it might not be needed
<tomixxx3> kk
<marcoceppi> it depends on why nova-volume errored out
<tomixxx3> do u know how i can Ctrl+A the content of a file opened with vi
<tomixxx3> ?
<marcoceppi> tomixxx3: you can install pastebinit
<marcoceppi> then run cat /var/log/juju/unit-nova-volume-0.log | pastebinit
<marcoceppi> and it'll give you a pastebin url
<tomixxx3> a nice ^^
<tomixxx3> here it is: http://pastebin.ubuntu.com/6949659
<marcoceppi> tomixxx3: okay, so this is the error
<marcoceppi> nova-volume ERROR: /dev/xvdb is not a valid block device
<marcoceppi> nova-volume needs a block device to take over
<marcoceppi> like ceph
<marcoceppi> I don't know if you actually need nova-volume
<marcoceppi> jamespage: do you actually need cinder or nova-volume to deploy openstack?
<jamespage> marcoceppi, you can elect to not have block storage and drop it
<marcoceppi> jamespage: cool, thanks
<jamespage> also nova-volume is < folsom btw
<tomixxx3> btw, all other charms are started now :-)
<marcoceppi> jamespage: right, cinder is recommended for folsom right?
<jamespage> and should not be carried through to 14.04
<jamespage> marcoceppi, that's correct yes
<marcoceppi> jamespage: cool, thanks!
<marcoceppi> tomixxx3: what you can do, for the sake of getting your openstack demo running, is remove nova-volume and continue on with the deployment
<tomixxx3> nova-volume needs its own machine, i guess? (i have read sth like this a few weeks ago, if i remember correct)
<cargill> in tests, when I've changed a config value, how do I find out when the change has been carried out so that I can test the result?
<marcoceppi> tomixxx3: yeah, though in future deployments you'll want to use cinder instead
<marcoceppi> cargill: are you using amulet?
<cargill> not yet
<marcoceppi> cargill: then there really isn't a way at the moment
<cargill> but can do if it makes things like that possible
<marcoceppi> cargill: well, it's not perfect, but it strives to resolve that problem by monitoring the hook queue for all the services to know when the environment is idle
<marcoceppi> cargill: otherwise you'll just have to put a sleep or something in your test for X seconds you think it takes on average for the config-change to occur
<tomixxx3> marcoceppi: More abstractly, later on, i want to upload sth to my cloud, process sth on my cloud and download sth from my cloud. so, is nova-volume not a kind of cloud-storage which i need?
<tomixxx3> (for now, i will remove nova-volume)
<cargill> marcoceppi: well, a config change can be a change in the deployed version, which means a redownload - there's really no telling, then
<marcoceppi> tomixxx3: you'll probably use an object store, nova-volume is for attaching drives and blocks to your servers
<tomixxx3> kk
<marcoceppi> whereas the object store can be used to upload stuff, have your servers process stuff, then place the results there
<marcoceppi> swift is the object store used in OpenStack
<marcoceppi> cargill: exactly
<marcoceppi> cargill: that's why I started amulet, to be able to intercept relation values and validate those values and to know when an environment was idle
<cargill> where's the docs for amulet? can't find it in the juju docs
<tomixxx3> hmm i have executed "juju destroy-service nova-volume" but it does not disappear when i call "juju status"
<marcoceppi> cargill: https://juju.ubuntu.com/docs/tools-amulet.html
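A minimal amulet test along the lines marcoceppi describes, using sentry.wait() to block until the hook queue is idle after a config change; the charm, config key, and final check are illustrative only:

    #!/usr/bin/env python3
    # Minimal amulet sketch: deploy, change a config value, then use
    # sentry.wait() to block until hooks settle before checking results.
    # The charm name, config key, and command below are illustrative.
    import amulet

    d = amulet.Deployment(series='precise')
    d.add('mysql')                       # pulls the charm from the store
    d.setup(timeout=900)                 # bootstrap and deploy
    d.sentry.wait()                      # wait for the environment to idle

    d.configure('mysql', {'tuning-level': 'safest'})
    d.sentry.wait()                      # idle again after config-changed

    # now it's safe to inspect the unit and assert on the result
    unit = d.sentry.unit['mysql/0']
    output, code = unit.run('grep -c innodb /etc/mysql/my.cnf')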
<marcoceppi> tomixxx3: because it's in an error state
<marcoceppi> tomixxx3: just keep runnin juju resolved nova-volume/0
<tomixxx3> kk
<tomixxx3> if i do "juju add-relation nova-compute rabbitmq-server" i get an ambiguous relation
<marcoceppi> tomixxx3: what's the ambiguous relation output?
<tomixxx3> http://pastebin.ubunut.com/6949742
<tomixxx3> sorry
<marcoceppi> no worries
<tomixxx3> http://pastebin.ubuntu.com/6949742
<marcoceppi> tomixxx3: nova-compute:amqp rabbitmq-server:amqp
<marcoceppi> tomixxx3: use `juju add-relation nova-compute:amqp rabbitmq-server:amqp`
<tomixxx3> kk
<tomixxx3> ok, all relations added
<tomixxx3> (except those with nova-volume)
<tomixxx3> now, i should point to http://node-address/horizon
<tomixxx3> i got an "Internal Server Error" when calling 10.0.0.109/horizon
<marcoceppi> tomixxx3: you may have to wait for a few mins
<tomixxx3> kk
<tomixxx3> this lightweight-container thing is quite interesting, they have their own ips ^^
<cargill> marcoceppi: amulet is awesome, it actually allows one to look into the service unit and see whether things are ok or not
<tomixxx3> latest juju state: http://pastebin.ubuntu.com/6949780
<marcoceppi> cargill: glad you think so, there are still a few bugs being worked out with how subordinates function, but it's coming along quite nicely
<cargill> where anything else would be a lot of boilerplate around ssh duplicated between charms
<marcoceppi> tomixxx3: is the dashboarding working now?
<tomixxx3> marcoceppi: no, not yet. do i have to expose some charms?
<tomixxx3> according to guide: 5. Expose the services you want (optional)
<tomixxx3> but i have maas, not?
<tomixxx3> guide: https://help.ubuntu.com/community/UbuntuCloudInfrastructure#Install_Juju
<marcoceppi> tomixxx3: maas has no firewaller, so it doesn't matter
<tomixxx3> ok, btw, http://10.0.0.109 works
<tomixxx3> and it says, it has no content yet
<marcoceppi> tomixxx3: what version of openstack did you deploy? folsom? grizzly?
<tomixxx3> dunno
<marcoceppi> tomixxx3: what does juju get openstack-dashboard show for openstack-origin?
<tomixxx3> default: true
<marcoceppi> tomixxx3: what does value show?
<tomixxx3> distro
<marcoceppi> okay, so you have folsom, which means you ran into the django bug
<tomixxx3> ok, is this a bad bug?
<marcoceppi> tomixxx3: well it prevents the dashboard from working
<marcoceppi> which is kind of annoying
<tomixxx3> k, is there a way to fix this or can i deploy another openstack version?
<tomixxx3> i want a dashboard, i have seen already the dashboard on the usb-all-in-one-node-cloud-demo and it looked nice :-)
<tomixxx3> gives me the feeling everything works as it should
<xp1990> Hi! is anyone here available to help me with a problem?.
<xp1990> I'm using juju 1.16.6
<xp1990> and I'm getting the old "index file contains no data for cloud" error.
<xp1990> I have generated imagemetadata.json and index.json
<xp1990> and uploaded them, using swift, to my cloud public bucket
<xp1990> which is named juju-<hash>/streams/v1/
<xp1990> then the two json files are there
<xp1990> yet I still get an error when running juju bootstrap
<xp1990> any ideas?
<tomixxx3> horizon is folsom, right?
<tomixxx3> is this a possible fix to the dashboard error: https://lists.launchpad.net/openstack/msg17255.html
<tomixxx3> marcoceppi: ok, i have to go now! however, today we made good progress :-) ty for all your help so far!
<marcoceppi> tomixxx3: np, I'll look for a patch for your django issue
<tomixxx3> marcoceppi: kk, ty!
<marcoceppi> xp1990: can you run juju bootstrap --show-log -debug and pastebin the output?
<jamespage> marcoceppi, the dashboard is hosed with juju deployment prior to havana
<jamespage> marcoceppi, cloud-tools contains a new version of django
<marcoceppi> jamespage: yeah, I remember, this is just because the cloud archive has a more recent version of django, right?
<jamespage> it should be fixed soon - I think it's committed in juju-core
<jamespage> marcoceppi, yeah - you got it
<marcoceppi> there should be a way to lower the priority, remove and reinstall django though, right?
<roadmr> marcoceppi: I just juju ssh'd into the node and removed django 1.5. 1.3 mostly works, though it also bombs on a few pages :/
<marcoceppi> roadmr: bummer, I guess it's best to just use havana if possible
<roadmr> marcoceppi: that'd be ideal! I'm lazy and I just juju deployed openstack-dashboard. Is there a way to point juju to charms that use havana?
<marcoceppi> roadmr: yeah, so you'll have to change the openstack-origin to havana for each charm, but that should trigger an upgrade
<roadmr> marcoceppi: oh cool! so it will just upgrade my existing charms/services? (if it destroys stuff that's OK, I don't have anything important there yet)
<marcoceppi> roadmr: well, something like openstack-origin: cloud:precise-havana/updates
<marcoceppi> roadmr: but yeah, it'll just upgrade the services and it shouldn't break anything or lose anything in the process
<roadmr> marcoceppi: awesome! I'll give it a try, thanks!
<jamespage> roadmr, it will upgrade yes - but openstack upstream only officially supports serial release upgrades
<jamespage> so you need to step
<jamespage> cloud:precise-grizzly
<jamespage> cloud:precise-havana
<jamespage> some things might double jump
<jamespage> its an area the server team is doing some work on for icehouse
<roadmr> jamespage: oh ok, I'll keep that in mind
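jamespage's serial-upgrade advice, sketched as a small driver script; the service list is a placeholder for whatever is actually deployed, and in practice each step should be allowed to settle before the next one starts:

    #!/usr/bin/env python
    # Sketch of the serial upgrade path above: walk openstack-origin
    # through each release in order instead of jumping straight to havana.
    # The service list is a placeholder, not a complete deployment.
    import subprocess

    SERVICES = ['keystone', 'nova-cloud-controller', 'openstack-dashboard']
    STEPS = ['cloud:precise-grizzly', 'cloud:precise-havana/updates']

    for origin in STEPS:
        for svc in SERVICES:
            subprocess.check_call(
                ['juju', 'set', svc, 'openstack-origin=%s' % origin])
        # watch "juju status" and the unit logs here, and only continue
        # once every service has finished upgrading to this release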
<med_> marcoceppi, et al: Is there a way with the local provider to pass in an lxc bind mount (or a way to edit the bindmounts and restart the container?)
 * med_ needs to attach a larger drive in an lxc/local-provider
<med_> and/or is sherpa (ssh provider) now available?
<marcoceppi> med_: not with lxc/local
<marcoceppi> med_: but manual provider (previously ssh/sherpa/null) is now available
<marcoceppi> recommended you use 1.17.2 release for manual provider as it's still relatively new
<med_> marcoceppi, thanks
<med_> nodz.
<med_> marcoceppi, https://juju.ubuntu.com/docs/config-manual.html the right place to start with manual/sherpa?
<marcoceppi> med_: yeah, except it's not called null anymore
<med_> looks good to me.
<med_> thanks marcoceppi, giving it a whirl.
 * marcoceppi files a bug to fix docs
 * JoshStrobl asks marcoceppi for a link to the bug so he can track it.
<marcoceppi> JoshStrobl: which one?
<JoshStrobl> marcoceppi: all of them! :P Well, any that are specific to fixes / improvements to Juju documentation, particularly if there are anything regarding improving documentation for local environments, promoting the use of Vagrant, etc.
<JoshStrobl> If there aren't any bugs regarding promoting the use of the Vagrant container, I'd be more than willing to file the bug if you just point me in the right place.
<marcoceppi> JoshStrobl: there's none about that in particular, you can file bugs here: https://bugs.launchpad.net/juju-core/+filebug make sure to target the "docs" branch of juju-core
<marcoceppi> JoshStrobl: we're also in the process of migrating the docs to gh, so eventually I think we'll track issues there as well
<JoshStrobl> noted!
<JoshStrobl> Hey marcoceppi, by branch do you mean apply the "docs" tag in the tag section of the file-bug form in juju-core?
<marcoceppi> JoshStrobl: no, there's a way to target a specific series
<marcoceppi> the docs are a series of juju-core
<JoshStrobl> I see it listed on the right side of https://bugs.launchpad.net/juju-core/docs/+bugs as a "Series-targeted bugs" but when you click "docs" and then go to file a bug, still shows the same form with no input area for providing the series. Is there a way to do that post filing the bug?
 * JoshStrobl thinks marcoceppi is probably face-palming right now
<marcoceppi> JoshStrobl: you have to first submit the bug before changing it
<marcoceppi> it's just a limitation of lp bugs
<JoshStrobl> Well, hopefully that'll get resolved in the future. Or maybe I should file a bug (if there isn't one already) for that too :P
<sarnold> launchpad is feeling a touch unloved, 92 critical bugs, 655 high importance bugs, https://bugs.launchpad.net/launchpad/
<JoshStrobl> marcoceppi: https://bugs.launchpad.net/juju-core/+bug/1281345
#juju 2014-02-18
<msx> hello guys, i'm starting with Juju (so far loving it) and while I'm at the configuration stage I want to ask you: how do you usually use Juju, straight from the r00t account (seems somewhat dangerous to me), from a special account set up to that end (like user "juju"), or just from the account you usually use to manage your Ubuntu servers?
<sarnold> msx: so far as I've used juju myself, I've just used my own user account; if I had shared administrative tasks with other users, I'd likely take the 'juju' user route and have people ssh in or sudo in ..
<msx> sarnold: that's what i was thinking, thanks a lot sarnold :D
<lazyPower> msx: its common practice to use a jump host when sharing it with a team
<lazyPower> there are newer features in the pipeline that will make sharing administrative control of your juju infrastructure easier, but until thats 100% fleshed out and documented, my suggestion would be to use a shared host/account for management.
<msx> lazyPower: hei! Sorry for the late reply. Yes, we are a very small team (just two) so my idea is to give both of us the same level of access to all Juju's features, thus I will be creating a juju account tailored for this end
<msx> lazyPower: thank you very much for your suggestion :)
 * lazyPower hat tips
<lazyPower> any time
<msx> "hat tips" lol
<davecheney> marcoceppi: https://bugs.launchpad.net/bugs/1281394
<_mup_> Bug #1281394: uniter failed to run non-existant config-changed hook <juju-core:New> <https://launchpad.net/bugs/1281394>
<davecheney> ^ config-changed isn't mandatory, right ?
<lazyPower> davecheney: none of the hooks are mandatory afaik
<lazyPower> it basically creates an immutable service, but its not required.
<davecheney> lazyPower: thanks for confirming
<davecheney> this is a regression then
<davecheney> deploying the ubuntu charm *used* to work
<davecheney> and that is a charm that doesn't get much simpler
<lazyPower> indeed. i'm running the amulet test against 1.17.2 that arosales wrote
<davecheney> lazyPower: what does , juju deploy cs:ubuntu
<davecheney> give you ?
<lazyPower> davecheney: loading 1 sec.
<lazyPower> also, the amulet test passed
<davecheney> well shit
<davecheney> one of us did something wrong
<lazyPower> which version of juju are you running davecheney?
<davecheney> 1.17.3.1
<davecheney> (trunk)
<davecheney> what does juju status ubuntu/0 say
<lazyPower> must be in trunk, 1.17.2.1 isn't complaining about the config-changed hook
<davecheney> 2.1 ?
<davecheney> what is that
<davecheney> are you doing upload tools from a release version ?
<lazyPower> http://paste.ubuntu.com/6952698/
<lazyPower> nope
<davecheney> lazyPower: something is very strange then
<davecheney> release tools do not have a 4th digit
<davecheney> oh, you're using the local provider
<davecheney> never mind
<lazyPower> i can check on a cloud provider, 1 moment
<davecheney> please
<davecheney> hp if you can
<lazyPower> sure thing
<lazyPower> davecheney: http://paste.ubuntu.com/6952734/
<davecheney> weird
<davecheney> can you have a look in the unit logs on that machine and see what happened
<davecheney> did it try to execute the config-changed hook
<davecheney> did it ignore it
<davecheney> etc
<lazyPower> davecheney: http://paste.ubuntu.com/6952783/
<lazyPower> i see nothing relating to skipping the config-changed hook. let me check the unit-0 log
<davecheney> lazyPower: yeah, that paste is the machine agent
<lazyPower> yeah nothing in the controller node's logs either
<davecheney> wont' be
<davecheney> will only be in machine-1:/var/log/juju/unit-ubuntu-0.log
<lazyPower> that was included in that first paste
<lazyPower> it should have been appended at the bottom
<lazyPower> i did a cat *.log | pastebinit
<xp1990> Hi! is anyone here available to help me with a problem?.
<xp1990> I'm using juju 1.16.6
<xp1990> and I'm getting the old "index file contains no data for cloud" error.
<xp1990> I have generated imagemetadata.json and index.json
<xp1990> and uploaded them, using swift, to my cloud public bucket
<xp1990> which is named juju-<hash>/streams/v1/
<xp1990> then the two json files are there
<xp1990> yet I still get an error when running juju bootstrap
<xp1990> that says the index file has no data for cloud  {RegionOne <ip>}
<marcoceppi> xp1990: can you pastebin your imagemetadata.json and index.json?
<tomixxx3> marcoceppi: hi, do u know already a bug fix for the "internal server error" of openstack dashboard?
<marcoceppi> tomixxx3: you have to remove django 1.5 and install 1.3, I don't have the exact steps yet
<tomixxx3> marcoceppi: kk
<hazmat> marcoceppi, do you have any amulet tests for mysql?
<marcoceppi> hazmat: just some very light weight skels, nothing of consequence yet
<hazmat> marcoceppi, k, just checking cause of addition of ssl support to interface
<marcoceppi> hazmat: ah, yeah that's fine, I'll make sure a test exists to gen ssl certs and validate those
<hazmat> marcoceppi, so.. i'm taking a slightly different approach.. might be more trouble than its worth.. but its much easier to configure.. ie just set ssl="on" or ssl="only" and the mysql service becomes a ca and hands its ca fingerprint over the relation, with "only" going to establish client cert passing and checking.
<marcoceppi> hazmat: sounds sexy to me
<marcoceppi> I like making it easier for users
<hazmat> yeah.. i still have to verify its sane for ha setups.
<hazmat> but significantly easier for end users and bundles.
 * marcoceppi nods
<dimitern> hazmat, it seems the fix for bug 1174610 sadly cannot land yet, due to 1.16 having direct state access, and needs to be postponed until the next release
<_mup_> Bug #1174610: unit ids should be unique <regression> <juju-core:Triaged by dimitern> <https://launchpad.net/bugs/1174610>
<hazmat> dimitern, yeah.. saw the bug email.. no worries. glad to see it get some attention
<tomixxx3> marcoceppi: do u think i can investigate the i-net on my own in order to get openstack-dashboard to run? or is it really hard to change django version?
<marcoceppi> tomixxx3: not that hard, just uninstall django
<tomixxx3> so, sudo apt-get purge django1.5 and sudo apt-get install django 1.3?
<roadmr> tomixxx3: yes
<roadmr> tomixxx3: *some* parts of the dashboard will still error out, I got that but haven't looked into it further
<roadmr> tomixxx3: still you should be able to get an overview and get the all-important credentials
<roadmr> (it's mostly some graphs and reports that still fail)
<tomixxx3> ok..
<tomixxx3> do i have to uninstall/restall django only on the openstack-dashboard node?
<tomixxx3> roadmr: do i have to uninstall/reinstall django only on the openstack-dashboard?
<tomixxx3> +node
<roadmr> tomixxx3: yes, only on that one
<roadmr> tomixxx3: juju ssh openstack-dashboard/0 should get you where you need to be
<tomixxx3> roadmr: kk
<jamespage> marcoceppi, OK - I'm going to bite the bullet and write some amulet tests for the openstack charms
<marcoceppi> jamespage: there's a known bug with subordinates that's being patched
<jamespage> I need better automated testing in the charm itself - deployer configs are OK - but its hard to get contributors to do that if they don't know about it
<jamespage> marcoceppi, is there a good example/guide I can follow?
<jamespage> and can I use deployer configs?
<tomixxx3> roadmr: ok i have installed django 1.3 now but i still get the internal server error. do i have to restart apache or sth similar?
<roadmr> tomixxx3: hmm maybe, I don't remember if I did
<tomixxx3> btw, i had to use other commands than sudo apt-get purge or install
<roadmr> try it and let me know if it works :)
<roadmr> tomixxx3: oh really?
<tomixxx3> yes, pip install Django==1.3 to install django for example
<tomixxx3> can i simple reboot the nodes?
<roadmr> yes that should work
<roadmr> tomixxx3: didn't apt-get install python-django=1.3.1-4ubuntu1.8 work? (assuming a 12.04 node)
<tomixxx3> roadmr: dunno, i have not tried this, but when i did some i-net investigation, i read that everyone uses other commands
<roadmr> tomixxx3: well you could also pip install django==1.4 which I think is the correct version for grizzly
<roadmr> tomixxx3: I specifically didn't want to go too deep (at that point I may as well ditch juju and manage the servers manually) so I stuck with what I could do with apt-get
<tomixxx3> roadmr: np, i just investigated google and i have found that other commands so i have tried them :-)
<roadmr> tomixxx3: cool!
<tomixxx3> roadmr: a mathematician told me: "today, the only important thing is to know WHERE to search" ^^
<roadmr> tomixxx3: indeed :)
<tomixxx3> ohoh keystone failed: "hook failed: 'config-changed'"
<tomixxx3> but openstack dashboard is online :-)
<roadmr> \o/
<tomixxx3> i got the following "Unable to communicate with identity service: [Errno 111] Connection refused. (HTTP 400)" when i try to manually invoke "config-changed"
<xp1990> Hi guys, got a bit of an issue, during bootstrap my client will just hang at attempting to connect to
<xp1990> Any idea how I could fix this, or debug it?
<xp1990> http://pastebin.com/fpGTPCUd
<tomixxx3> oh keystone is working now oO
<tomixxx3> i manually executed ./hooks/config-changed and i got this http 400 exception but keystone is indicated as "started" when i execute "juju status"
<jamespage> marcoceppi, omg - /usr/bin/python3 /usr/bin/easy_install3 cherrypy
<marcoceppi> jamespage: ugh, I need to stop cherrypy for something else
<marcoceppi> s/stop/drop/
<jamespage> marcoceppi, i object more to 'easy_install;
<jamespage> marcoceppi, currently means I can't run amulet inside our QA cloud
<marcoceppi> jamespage: what, would pip be better? I should actually just have the tarball in there and use a venv I guess
<jamespage> marcoceppi, archive or contained in the charm is good for me
<marcoceppi> I'll use something from the archive
<stub> What is the syntax in an amulet test to deploy 'this charm'? It seems to be d.add('mycharm'), but that seems  to be hitting the charm store and failing with a 404 (since the charm isn't in the store yet)
<jamespage> stub, I was just about to ask the same thing
<jamespage> stub, I need a pre-flight test prior to submission or to review a MP
<jamespage> marcoceppi, ^^ is that possible?
<marcoceppi> stub jamespage that's a known issue that's being patched in 1.2.9 - it'll deploy using local, but will attempt to do a remote lookup when running relation grafting
<marcoceppi> so it does a lookup against the charmworld api, quite annoying I know
<stub> that's ok, it means I get to submit my new charm without having to debug these tests ;)
<marcoceppi> stub: technically we don't require tests..yet ;)
<stub> I require tests ;)
<jamespage> marcoceppi, confused
<marcoceppi> jamespage: sorry, when you d.add() charm, if charm is the name of the current charm the test is being run from, it'll use that in its deploy schema. However, later on down the road in the whole amulet stuff, it does a lookup against charmworld (where you're getting the 404) to get relation details. It's failing there
<jamespage> marcoceppi, ok- so it's using the local charm - I'm writing tests for an existing charm so that's ok I think
<marcoceppi> jamespage: yeah, it autodetects if you're in a charm; when you run d.add('charm') and don't specify a charm to pull from, it'll check if the current charm the test is running from is the charm added and use that path for the deployer file, otherwise it'll expand the url to use the charmstore
<jamespage> marcoceppi, its used bzr to clone the charm into the deployer directory right?
<marcoceppi> jamespage: pretty much, yes
<jamespage> OK - so any changes have to be commited to the branch first
<jamespage> I understand
<marcoceppi> jamespage: actually, I think it does a copydir, I need to verify
<marcoceppi> pretty sure I did that instead so git charms would still work
<jamespage> marcoceppi, looks that way
<jamespage> marcoceppi, no - I can't tell
<marcoceppi> jamespage: actually, it looks like it leaves it up to deployer
<stub> Oh, charm proof wants a copyright now... anyone know what I should be using?
<marcoceppi> stub: an OSI approved license
<jamespage> marcoceppi, what's the assumption on where the things in tests/ get run from? the root?
<jamespage> of the charm?
<marcoceppi> jamespage: CHARM_DIR
<stub> So whack a GPL3 on there since this is on Canonical time
<marcoceppi> stub: sure, a lot of people have been doing the debian style copyright file
<marcoceppi> stub: something like this, https://github.com/charms/wordpress/blob/master/copyright
<marcoceppi> but basically, just who owns the copyright, and the license for all the charm files
 * stub copy pastas
<stub> Did you really write that charm in 2012?
<marcoceppi> stub: yes, it's an old ass charm
<marcoceppi> and that was a re-write of the original charm, so it existed even before that
<jamespage> marcoceppi, I've not got this far yet but does the deployer tidy up after itself?
<marcoceppi> jamespage: how so?
<jamespage> i.e. will it destroy the services ready for the next run
<marcoceppi> jamespage: yeah, so each test file gets a fresh bootstrap, unless you set a certain flag
<marcoceppi> jamespage: you can log the output of the test run using the -o flag with juju test, which will copy all the logs from the run to the path in -o
<jamespage> marcoceppi, juju test was the bit of magic I was missing
<marcoceppi> jamespage: oh, yeah, sorry, amulet and juju-test are decoupled. Ones a framework for writing tests the other is the test executor
<jcastro> utlemming, how close to non-beta do you consider the vagrant boxes?
<marcoceppi> utlemming: on a related note, the mounts are broken in the latest images, where should we file bugs?
<jamespage> marcoceppi, ah - so I probably don't want to be installing amulet in tests/01-setup
<marcoceppi> jamespage: you probably /do/ want to be doing that
<jamespage> marcoceppi, yeah - but juju test bootstraps an environment to run that script
<marcoceppi> jamespage: I'm working on adding isolation to juju-test, so running it will create an LXC that executes the tests without dirtying your system
<marcoceppi> jamespage: oh, yeah, that's another issue. For the time being people have been making the script non-executable; for the sake of time you can just feed the juju-test plugin the tests you want to run
<marcoceppi> ie: juju tests 02-other-test 03-this-thing
<marcoceppi> to bypass waiting for a setup/teardown to install deps
<jamespage> marcoceppi, ok - nice
<arosales> jcastro: do you know if http://askubuntu.com/questions/134977/juju-stuck-in-pending-state-when-using-lxc is still valid for 1.16.x release?
<marcoceppi> arosales: tjat
<marcoceppi> that's a really old release of juju
<marcoceppi> arosales: as in, that's python juju
<jcastro> I voted to close it
<arosales> jcastro: marcoceppi: ack that is what I was thinking based of the comments and release
<arosales> thanks
<hazmat> and most likely its nothing to do with juju but the lxc version of the time if memory serves
<jcastro> yeah, mostly I want it deleted so we can get the google search namespace back
<hazmat> hmm..
<hazmat> actually that question/answer is still relevant
<hazmat> ufw can still interfere with local provider
<hazmat> jcastro, ie.. this is still a real possibility https://bugs.launchpad.net/juju/+bug/998238
<_mup_> Bug #998238: local provider unit agents get stuck in pending state because of host firewall blocking communication <pyjuju:Triaged> <https://launchpad.net/bugs/998238>
<hazmat> its dated but the symptoms and solutions are still the same
<jcastro> huh
<jcastro> I wonder if we can put a check in core and at least spit something out?
<hazmat> bugs away
<lazyPower> utlemming: is there a specific launchpad project you'd like me to open bugs against for the virtualbox images? I went looking for cloud-images and have found no such project or relevant projects - the closes was cloud-init.
<lazyPower> *closest
#juju 2014-02-19
<rmiguel> hi friends, I have doubts concerning the maas nodes; the documentation does not make clear what procedure should be performed after setting the MAAS server IP during node installation - after that the machine shuts down automatically and does not proceed with the installation of the NODE.
<rmiguel> Grateful for any help
<xp1990> hi guys, got an issue regarding bootstrapping
<xp1990> the system deploys an instance
<xp1990> it sets up fine
<xp1990> but it can't connect
<xp1990> it hangs at attempting to connect to the instance
<xp1990> after 10 minutes it terminates
<xp1990> it seems that maybe there is nothing listening on the API port?
<xp1990> Anyone?
<marcoceppi> xp1990: can you ssh in to the instance? what provider are you using?
<xp1990> I cannot SSH in. but I know ssh is running.
<xp1990> I can ping the created machine, and in my openstack I can see the machine running
<xp1990> the logs all show a correct set up
<xp1990> but the last thing juju reports is Attempting to connect to 10.0.10.5:22
<xp1990> Attempting to connect to 161.74.26.74:22
<marcoceppi> xp1990: without connecting to the machine it's going to be hard to say what's wrong
<xp1990> Yeah
<xp1990> :\
<marcoceppi> xp1990: if you can get to the logs, can you paste the cloud-init log?
<xp1990> I only have what openstack tells me
<xp1990> the cloud-init log finishes successfully.
<xp1990> cloud-init boot finished at Wed, 19 Feb 2014 12:08:57 +0000. Up 31.23 seconds
<xp1990> I have a feeling it could be a networking issue.....
<xp1990> ....but maybe not?
<xp1990> Because the instance is pingable
<Dave-J> Hi Guys, Can anyone tell me, if JuJu is still limited to only booting Ubuntu based guests - I saw this post with patches to support CentOS, but I wondered if that had ever made it back into the trunk?
<Dave-J> http://f.souza.cc/2012/07/using-juju-to-orchestrate-centos-based.html
<tomixxx3> hi, with the default "admin" user, is it ok if i get "Internal Server Error" when i click on "Instances & Volumes", "Images & Snapshots" ?
<tomixxx3> in openstack dashboard
<marcoceppi> Dave-J: official support is still Ubuntu, but we're working on officially bridging other operating systems
<marcoceppi> Dave-J: also, that fork of juju is quite old, it's based on juju 0.5 and the latest stable is 1.16.6
<Dave-J> marcoceppi:  Thanks.  Yeah that's why I was wondering if any changes had made their way into the trunk, as I would like to try it out for a proof of concept.
<Dave-J> marcoceppi: any idea on time frame for bridging to other OS ?
<marcoceppi> Dave-J: it's probably possible to do right now; the only current limitations with other platforms are: you'll have to create your own metadata stream to use images other than Ubuntu, you'll have to make sure that image has cloud-init installed, and you'll have to have a charm that isn't tied to Ubuntu (apt, etc.)
<marcoceppi> Dave-J: so if you wanted to try, those are the steps I would look into
<Dave-J> marcoceppi: Thanks - so I'm already using juju on my private openstack deployment, and had to generate imagemetadata.json - so I'm comfortable with that.  I'll try to put a sample charm together and see if it works.  Thanks
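For anyone following along, the metadata step Dave-J mentions is handled by the juju-metadata plugin; a sketch from memory (confirm the flags with --help on your release, and treat all values as placeholders):

    juju metadata generate-image --help     # confirm flags for your version
    juju metadata generate-image -i <image-id> -s <series> -r <region> -d ~/simplestreams
    # then point the environment at the generated stream, e.g. in environments.yaml:
    #   image-metadata-url: http://my-webserver/simplestreams/images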
<bloodearnest> is there anyway to call juju-log outside a hook context? (without juju run, as am on 1.16)
<mgz> bloodearnest: nope, but yo can just use rsyslog directly I suspect?
<bloodearnest> mgz: yeah, I could. It's from a cron job installed by the charm
<marcoceppi> bloodearnest: mgz well you /can/ but it's quite a lot of work
<danob> hi all
<marcoceppi> danob: hi o/
<danob> how do i run unit-get private-address/public-address in an lxc container?
<danob> i mean, where is the unit-get command?
<marcoceppi> danob: you can only run unit-get from a hook, you can't just run it
<danob> marcoceppi: hmm thanks
<lazyPower> danob: you can use debug-hooks environment during hook execution to run the command
<danob> lazyPower: can you point me to a doc? debug-hooks is not part of the juju and charm-tools packages, right?
<marcoceppi> danob: juju debug-hooks is part of juju core
<marcoceppi> https://juju.ubuntu.com/docs/authors-hook-debug.html
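In other words, hook tools like unit-get only exist inside a hook context, and debug-hooks gives you one while a hook is trapped. A minimal sketch (unit name and config key are placeholders):

    juju debug-hooks apache2/0          # opens a tmux session on the unit
    # from another terminal, trigger any hook, e.g. a config change:
    juju set apache2 some-key=some-val
    # a window opens in the tmux session for the trapped hook; inside it:
    unit-get private-address
    unit-get public-address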
<danob> marcoceppi: thanks
<danob> why is juju written in Go?
<danob> found this [http://dave.cheney.net/wp-content/uploads/2012/08/august-go-meetup.pdf] it answered my question :)
<nxvl> hi
<nxvl> where can i see what a charm is actually installing
<nxvl> i just spun up a juju gui and a mongodb instance, but i'm still a little lost
<nxvl> i'm not sure if the mongodb charm i just deployed can be easily turned into a cluster later on
<nxvl> and i want to deploy my own application behind nginx running over gunicorn and monitored by new-relic, but i'm quite clueless on where to find that information
<marcoceppi> nxvl: mongo will scale properly
<marcoceppi> you can view the charms here
<marcoceppi> http://manage.jujucharms.com/charms/precise/mongodb
<marcoceppi> Clicking on Repository will bring you to the code repo for that charm
<marcoceppi> It also includes the rendered README, which should answer a lot of your questions
<nxvl> yup, starting by knowing the author :D
<marcoceppi> nxvl: good, if it doesn't work then you know who to blame ;)
<nxvl> marcoceppi: and who to poke and annoy :D
<nxvl> that's one of the awesome things about being a former MOTU ;)
<danob> how do i test run a charm?
<danob> without creating any unit
<danob> is it possible?
#juju 2014-02-20
<hatch> danob you can use lxc's
<hatch> deploy the charm using the 'local' environment
<danob> hatch: i am using lxc's :)
<hatch> ohh, well then :)
<hatch> so do you just want to test the individual scripts?
<marcoceppi> danob: what's your end goal?
<danob> hatch: to test individual scripts i am reading juju debug-hooks now, will it do?
<hatch> that will allow you to watch the debug logs while the hooks are executing
<hatch> but like marcoceppi asked....what is your end goal? What are you trying to achieve here?
<marcoceppi> that will allow you to execute commands as if you were that hook*
<danob> marcoceppi: my end goal is to test and debug my charm as if i were just running a python script. i just want to make charm testing and debugging simple for me.
<hatch> er yes and that too....sorry :)
<marcoceppi> danob: well, we have two ways to do that. One is to simply unit test your hooks. So if you wrote them in Python you'll want to use the unittest module to test your python code as you would any python project
<danob> marcoceppi: what is the best workflow for this (test/debug)?
<marcoceppi> danob: the other is to write charm tests, however those are designed as integration tests, which will actually deploy the charm, and other charms
<marcoceppi> danob: debug-hooks is not what you want, that's something different
<hatch> danob I test mine as individual scripts
<hatch> just fyi
<marcoceppi> What you want is unittests, and that depends entirely on the language of the charm
<danob> marcoceppi: "the other is to write charm tests" like python-django charm?
<marcoceppi> danob: there are quite a few charms that have charm tests
<marcoceppi> danob: python-django is one example
<marcoceppi> danob: memcached is another example, https://bazaar.launchpad.net/~charmers/charms/precise/memcached/trunk/files/head:/tests/
<danob> marcoceppi: hmm thanks :)
<danob> marcoceppi: "One is to simply unit test your hooks" in this way can i run config-get in subprocess.call ?
<danob> marcoceppi: i think not ryt?
<marcoceppi> danob: no, you can't, you would mock calls to config-get and the other hook commands
<marcoceppi> you can't run those commands unless the service is deployed in a juju context
<marcoceppi> Those only exist in the context of a deployed service in a juju environment
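A minimal sketch of that unit-test approach, assuming a hook written in Python that shells out to the hook tools; the module and helper names here are hypothetical:

    # test_hooks.py - run with `python -m unittest test_hooks`
    import unittest
    from unittest import mock   # on py2, use the external `mock` package

    import hooks  # hypothetical module; assume get_config(key) does
                  # subprocess.check_output(["config-get", key])

    class TestConfigGet(unittest.TestCase):
        @mock.patch("hooks.subprocess.check_output")
        def test_reads_port_from_config(self, check_output):
            check_output.return_value = b"8080\n"   # fake `config-get port`
            self.assertEqual(hooks.get_config("port"), "8080")
            check_output.assert_called_once_with(["config-get", "port"])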
<danob> marcoceppi: hmm i understand
<danob> marcoceppi: is it possible to develop an emulator for the juju context/environment? i would be happy to contribute to that project :)
<marcoceppi> I have considered it, but it's not a priority at the moment. You're welcome to try
<danob> marcoceppi: hmm, then where can I get full information about how the juju context/environment is created, or how a charm is deployed, step by step with source code references? I am not going to start a project right away, I just want to understand it deeply.
<marcoceppi> danob: I mean, we have docs here and there, have you looked at our documentation? http://juju.ubuntu.com/docs
<danob> marcoceppi: yes.
<marcoceppi> that's pretty much it. if you want to see what the environment looks like, deploy a charm, run juju debug-hooks, then initiate a config change with `juju set <service> key=val`; when the hook gets trapped, you can run config-get, you can type env to see the environment, etc
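Spelled out step by step (service name and config key are placeholders):

    juju deploy mysql
    juju debug-hooks mysql/0            # wait for the tmux session
    # in another terminal, initiate a config change:
    juju set mysql some-key=some-val
    # back in the trapped config-changed window:
    config-get                          # dump the charm's current config
    env | grep JUJU                     # inspect the hook environment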
<danob> marcoceppi: hmm thanks :)
<milk> could anyone tell me how to promote a mysql-slave to master using juju?
<marcoceppi> milk: remove the slave relation, then recreate the relation with the slave as the master
<milk> marcoceppi: there are two units in the mysql-slave service. i want to promote one of them to master and keep the other as slave.
<marcoceppi> milk: in short, you can't
<milk> hmm....
<marcoceppi> those are the units of a service group, so what you could do is deploy a new mysql service as mysql-master, then scale the slave down, then create the relation
<milk> but the data would be destroyed.
<marcoceppi> milk: stand up the new mysql-master, make it a slave of the mysql-slave, have it sync, break the relation, re-establish it as master to the slaves, verify the data is there, then scale down the mysql-slave service group
<marcoceppi> or just leave it where the new "mysql-master" is the slave
<marcoceppi> and scale down the slave group to only have one
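Roughly, the migration marcoceppi outlines; a sketch assuming the mysql charm exposes master/slave relation endpoints (exact endpoint names depend on the charm revision):

    juju deploy mysql mysql-master
    # sync: make the new service a slave of the old one
    juju add-relation mysql-slave:master mysql-master:slave
    # after sync, reverse the roles:
    juju destroy-relation mysql-slave:master mysql-master:slave
    juju add-relation mysql-master:master mysql-slave:slave
    # verify data, then scale the old group down:
    juju remove-unit mysql-slave/1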
<milk> thanks, but it will take too much time for the failover to finish..
<milk> i think this single-master-multiple-slave scenario is a pretty common one.
<milk> but juju's model seems not to be able to handle it properly...
<milk> :)
<marcoceppi> it handles it, just differently than you'd expect
<marcoceppi> master-master replication is a model better embodied in juju
<marcoceppi> milk: I've been working on charming this http://www.proxysql.com/ which would make the underlying questions (which is master, which is slave, which do I fail over to) easier to handle in juju
<milk> marcoceppi: i agree, the multi-master databases(like riak) can fit in juju's model easily.
<milk> marcoceppi: so, with proxysql, we should put both master and slave into a single service, and let the proxy do the rest of the work?
<milk> marcoceppi: and the failover will happen in that service, instead of between two services?( mysql-master and mysql-slave)
<marcoceppi> milk: yes, using the same relation schema, then proxysql would know how to failover. Charms would use proxysql as a single point of contact, and it'd be configured to fail over to the slave and promote the slave to master directly
<milk> marcoceppi: so the proxysql would be a subordinate service deployed with the mysql server?
<marcoceppi> milk: probably, yes
<marcoceppi> milk: could also be a subordinate deployed on the actual application servers
<marcoceppi> or both
<marcoceppi> proxysql is kind of...flexible in how it can be deployed
<milk> marcoceppi: or as a separate principal service..
<marcoceppi> that as well
<lazyPower> marcoceppi: good news, amulet installs in precise.
<milk> marcoceppi: agree. seems no difference.
<marcoceppi> lazyPower: with the pkg-test ppa?
<lazyPower> I did add that, yes.
<marcoceppi> lazyPower: sorry, apparently I killed the wrong process and display server died, going to have to reboot
<lazyPower> nbd
<lazyPower> do i need to nuke the vm and retry from the juju ppa?
<lazyPower> or is it there yet?
<milk> marcoceppi: thanks for your time :)
<marcoceppi> milk: np! let me know if you have any other questions
<tomixxx3> hi, when i try to enter "instances & volumes" in the openstack dashboard, i only get an "Internal Server Error" message. However, i have not deployed "nova-volume". do i need "nova-volume" for this functionality? the same error comes when i try to open "images & snapshots"
<marcoceppi> tomixxx3: I don't know if that's required for the dashboard to work but it might be
<tomixxx3> marcoceppi: kk, i guess i will figure out this when i actually will USE openstack to deploy my task.
<roadmr> tomixxx3: can you "juju ssh openstack-dashboard/0", then go to /var/log/apache2 and look at errors.log? the error should give a clue as to what's wrong
<roadmr> tomixxx3: (I *think* it's related to the django version mismatch with grizzly but I haven't looked at it further; yes, I have the same problem)
<tomixxx3> apache log says: The request you have made requires authentication. http 401
<bitgandtter> hello good day
<bitgandtter> has anyone made a successful deployment of juju on rackspace?
<bitgandtter> anyone?
<Ming> Is there a way for juju-log to log as ERROR?
<danob> what would be the best practice if I want to download a *.tar.gz file from a charm's install python script?
<hatch> danob in my latest charm review it was suggested to package it in the charm
<hatch> danob but in the current version this is how my Ghost charm does it https://github.com/hatched/ghost-charm/blob/master/hooks/install
<danob> hatch: thanks man :) is there any size restrictions in charm store?
<danob> hatch: or in juju env?
<hatch> not as far as I know...but if it's excessive it might be rejected in review
<hatch> I just say 'might' because if it was 1GB I would reject it lol
 * hatch is not a reviewer however
<danob> hatch: lol
<hatch> marcoceppi ^^
<rick_h_> danob: you can always do that in a charm you write, but for best results in getting it to users it should be able to work offline
<rick_h_> danob: many charms contain a files or releases directory that contains the downloaded file so that it can work offline
<rick_h_> danob: I think that's policy for reviewed charms going forward.
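A sketch of that pattern with hypothetical names: ship the tarball in the charm's files/ directory, fall back to wget only if it's missing, and verify the checksum either way:

    #!/bin/bash
    # hooks/install (sketch)
    set -e
    TARBALL=mymod-1.0.tar.gz
    SHA256=<known-good-checksum>
    if [ -f "$CHARM_DIR/files/$TARBALL" ]; then
        cp "$CHARM_DIR/files/$TARBALL" "/tmp/$TARBALL"     # offline path
    else
        wget -O "/tmp/$TARBALL" "http://example.com/releases/$TARBALL"
    fi
    echo "$SHA256  /tmp/$TARBALL" | sha256sum -c -
    tar -xzf "/tmp/$TARBALL" -C /opt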
<danob> rick_h_: hmm
<danob> rick_h_: thanks
<hatch> danob what charm are you writing?
<danob> hatch: i am writing a charm which will deploy an apache2 mod plus apache2 itself, so i need to download the mod using wget
<danob> hatch: this mod is precompiled binary
<hatch> ahh did you look into the current apache 2 charm to see if you could use it to add your mod? http://manage.jujucharms.com/charms/precise/apache2
<hatch> or maybe enhance it to allow you to add custom mods?
<danob> hatch: no, but i will. i was thinking that i would install apache2 using apt-get. i wanted my own unit like my-charm-name/0
<hatch> on deploy you can call a service almost whatever you want
<hatch> or you can fork the promoted apache2 charm and make your own modifications to it
<hatch> just throwing ideas out there for ya, take them as you will :)
<maxcan_> is there any way to configure juju to add a unit to a service when a service dies?
<danob> hatch: I appreciate ideas :) thanks man
<hatch> :)
<danob> hatch: if i deploy an apache2 charm, then how do i put my mod.so file in /usr/lib/apache2/modules and other configuration files on the apache2/0 unit?
<danob> hatch: i am confused here
<danob> hatch: can you point me to a charm that does this type of operation?
<lazyPower_> maxcan_: Example of what you're trying to do?
<maxcan_> have resilience against AWS's random killing of ec2 instances
<lazyPower_> maxcan_: ah, juju should be doing that automagically. if the environment says a service should have X units, and on a pulse check it realizes it only has 1, it should be spinning up a replacement unit
<maxcan_> hm, i'll go back and check
<sarnold> lazyPower_: _really_?? cool
<sarnold> lazyPower_: though if the billing department sees the other units are still up, that could get expensive :)
<maxcan_> IIRC, when I killed an ec2 instance, the machine state went to terminated as did the agent-state but no new machines got spun up
<lazyPower_> sarnold: i overheard this prior. Please feel free to correct me if i'm misinformed
<lazyPower_> let me bootstrap and validate that statement, 1 moment maxcan_
<maxcan_> i'm also on an old version
<maxcan_> let me confirm it on my end.. dont want to bother you since i'm not sure
<maxcan_> s/sure/certain
<lazyPower_> well i just regurgitated information i read in chat, so there's no evidence aside from hearsay
<lazyPower_> i may have read some late night conversation that's not valid - so i'll check regardless
<lazyPower_> maxcan_: ok i'm seeing the machine terminated status on 1.17.2
<lazyPower_> so the behavior you are seeing is by design, i was misinformed.
<hatch> maxcan_ you are probably looking for something like what https://landscape.canonical.com/ provides
 * timrc considers writing the charm for: https://github.com/robmerrell/hipsterdb
<lazyPower_> timrc: do it!
<timrc> lazyPower_, ;)
<lazyPower> The icon for the service would have to randomly not show up though, because it became too mainstream
#juju 2014-02-21
<davecheney> hey, can anyone help with charm-tools ?
<webbrandon> bangorang!!!!!!!!!!!!!
<davecheney> +1, insightful
<webbrandon> :)
<webbrandon> I thought I remembered 13.04 being pushed as the default server launch
<webbrandon> I dont want 12.04
<sarnold> webbrandon: 13.04 has been retired: https://wiki.ubuntu.com/Releases
<webbrandon> how about 13.10
<webbrandon> :)
<webbrandon> kidding
<webbrandon> can't find anything in the code for aws to handle this image-stream configuration setting that sits in the template.  I just wanted to launch a current version.
<webbrandon> sarnold: this is useful :  juju metadata validate-images -s saucy -r us-east-1
<dpb1`> Hi -- I'm trying to bootstrap azure, I keep getting 'The affinity group name is empty or was not specified', my storage-account-name is in West US, my location is West US.  Is there something I'm missing.  I get the same error on Trunk and 1.17 (trusty)
<dpb1`> 1.16 too... must be something I'm doing.
<dpb1`> ya, switching to the east us sorted it.  /me ignores for now
<tomixxx3> hi, can i launch an "instance" in openstack without the dashboard? because the dashboard does not work properly, i.e. the page "instances & volumes" ends up in an "internal server error" when clicking on it.
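For what it's worth, the dashboard is just a frontend to the APIs, so an instance can be launched with the nova client of that era instead (credentials and names below are placeholders):

    source ~/openrc          # exports OS_USERNAME, OS_PASSWORD, OS_AUTH_URL, etc.
    nova image-list
    nova flavor-list
    nova boot --image <image-id> --flavor m1.small --key-name mykey test-instance
    nova list                # watch it go ACTIVE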
<gnuoy> I'm using the openstack charms to deploy Havana. I'm setting the mtu on the quantum-gateway via dhcp to 1546 but something seems to be reverting it. Can you think of anything that might account for that ?
<jamespage> gnuoy, odd
<jamespage> I can't see anything in the charm that touches the MTU
<jamespage> https://code.launchpad.net/~openstack-charmers/charms/precise/quantum-gateway/trunk
<gnuoy> jamespage, and none of the neutron services are likely to fiddle with it themselves ?
<Lord_Set> So quick question. With Ubuntu 14.04 is Openstack deployment with Juju going to get any better?
<Lord_Set> Or any cool new features being added?
<jamespage> Lord_Set, mainly incremental improvements; ML2 plugin will be the default for neutron
<Lord_Set> Awesome
<Lord_Set> What about Cisco Nexus/Neutron integration via Juju? Well the Openstack/Neutron side of it
<jamespage> Lord_Set, some guys from Cisco have been working on that - they did an announce a few weeks ago
<jamespage> the changes still need to land in the official charms but that is planned
<Lord_Set> Yay great news. I missed that somehow.
<jamespage> gnuoy, I don't think so
<Lord_Set> I love my Nexus stuff
<gnuoy> jamespage, ok, thanks for your time. I'm going to redeploy the node in question and see what happens
<jamespage> gnuoy, ok
<Lord_Set> The Nexus API makes VM deployment a lot less of a headache but takes some time to setup and configure properly
<Lord_Set> MAAS and Juju are soon going to dominate Mirantis. Trying to use Mirantis made me want to cry after a bit. Its deployment model is so rigid, and changing or customizing anything about deployment via Mirantis just plain hurts.
<jamespage> Lord_Set, http://blogs.cisco.com/datacenter/availability-of-the-nexus-1000v-cloud-networking-platform-for-openstack-unveiled-at-cisco-live/
<Lord_Set> Thanks James
<Lord_Set> That is beyond awesome and I cannot wait! Cisco is doing some amazing things with Openstack and virtual machine networking.
<nessita> hello everyone, I was hoping someone could help me debug why a LXC container where I ran juju bootstrap (as per https://juju.ubuntu.com/docs/config-LXC.html) now can not connect to the internet
<gnuoy> jamespage, after redeploying the quantum-gateway I see eth0 has the desired mtu setting. Looking at the other interfaces br0 also has it but the rest don't http://pastebin.ubuntu.com/6970990/ is that ok ?
<jamespage> gnuoy, yup
<gnuoy> jamespage, excellent, thanks
<cjohnston> A few of us just today started seeing "ERROR empty image-stream in environment configuration" when trying to deploy.. has something changed to cause this to happen?
<rick_h__> cjohnston: heh, just hitting up people on this in #juju-dev
<rick_h__> cjohnston: this on ec2?
<rick_h__> natefinch: ^ makes me wonder if something is down on the simple streams side perhaps? or something like that?
<cjohnston> rick_h__: canonistack
<rick_h__> cjohnston: ok
<cjohnston> rick_h__: https://bugs.launchpad.net/juju-core/+bug/1217397  ?
<_mup_> Bug #1217397: image-stream configuration option for openstack provider <config> <openstack-provider> <juju-core:Fix Released by wallyworld> <https://launchpad.net/bugs/1217397>
<rick_h__> cjohnston: yea, it seemed to be azure only but something today is causing issues with it on other providers
<cjohnston> :-(
<rick_h__> cjohnston: what juju version?
<rick_h__> I've got another person testing on precise, I'm on trusty 1.17.2
<cjohnston> the newest upload in t
<cjohnston> 1.17.2-0ubuntu4
<plars> I have 1.17.2-0ubuntu4 and I'm seeing it too
<rick_h__> ok, well at least that bit is common
<rick_h__> natefinch: this is looking like a bigger issue?
<rick_h__> sinzui: ^
<plars> I've been able to bootstrap and deploy just fine earlier today though (with no upgrade in between)
<rick_h__> plars: yea, why I'm wondering if it's something associated with finding images not juju itself
<natefinch> rick_h__: that error message is definitely a juju error message, it's trying to validate your configuration.... but for some reason it's barfing on the image-stream value, even though it's supposed to be fine for it to be unset
<rick_h__> plars: on trusty?
<plars> rick_h__: yes
<rick_h__> natefinch: my teammate got it to run using 1.17.1 on precise
<natefinch> rick_h__: hmmm... I'll bring it up with the rest of the devs.  Seems there's something funky going on with 1.17.2
<rick_h__> natefinch: thanks. Did you want me to file a bug or are you ok from here then?
<natefinch> rick_h__: bugs are always good.  Help prevent things from falling through the cracks
<rick_h__> natefinch: filed https://bugs.launchpad.net/juju-core/+bug/1283246
<_mup_> Bug #1283246: cannot deploy on 1.17.2 on trusty: error on image-stream <juju-core:New> <https://launchpad.net/bugs/1283246>
<rick_h__> plars: cjohnston please add any notes and track &
<cjohnston> thanks rick_h__
<natefinch> rick_h__: thanks.  I'll make sure it gets looked at, ASAP.
<cjohnston> natefinch: could we maybe get a slightly older version thrown in a ppa somewhere?
<natefinch> cjohnston: 1.16 should be what's in ppa:juju/stable
<rick_h__> natefinch: don't think it's built for trusty
<rick_h__> or I lied
<rick_h__> and it is
<rick_h__> https://launchpad.net/~juju/+archive/stable?field.series_filter=trusty
<cjohnston> ack
<cjohnston> 1.16 complains about a lot of config fields :-/
<natefinch> unfortunately, forwards compatibility is not something we've spent any time on
<lazyPower> has there been a recent change to the charmstore api?
<rick_h__> lazyPower: how so?
<rick_h__> and by store you mean juju store or charmworld?
<lazyPower> rick_h__: i mean charmworld - my apologies. I keep forgetting there are 2 api's
<rick_h__> lazyPower: what change are you getting. There was a release this week, but trying to recall what api changes might have existed
<lazyPower> i have amulet tests that were working an hour ago - now i'm getting a non-helpful stacktrace bubbling up from charmworldlib. I'm trying to decompose the code to find the endpoint it's calling - that way i can rule out a temporary hiccup in charmworldlib and validate it's the code
<rick_h__> lazyPower: gotcha, yea nothing changed today in charmworld. I'd have to see more details to help trace. Let me make sure charmworld is reporting healthy
<rick_h__> bah, elasticsearch bit it
<lazyPower> cool so i'm not going crazy then
<mbruzek_> I am also encountering an error with juju lazyPower, and rick_h__
<mbruzek_> rick_h__, I have a traceback that looks like this http://pastebin.ubuntu.com/6973456/
<mbruzek_> I can grab logs if you need them.
<lazyPower> mbruzek_: its because the search provider tanked
<rick_h__> lazyPower: mbruzek_ if it's charmworld I'm working with webops on it atm
<lazyPower> thats the one :) Thanks rick_h__
<webbrandon> Could I launch a t1.micro bootstrap to manage m1 type instances
<lazyPower> webbrandon: that's not recommended at all, since the controller node runs zookeeper and whatnot
<lazyPower> you'll run out of memory on the bootstrap node when multiple nodes converge
<sinzui> juju doesn't use zookeeper
<lazyPower> wait, what?
<sinzui> juju has used mongodb for 8 months
<lazyPower> i'm just full of misinformation today
<webbrandon> lazyPower: you're sending me down the wrong path huh???
<lazyPower> webbrandon: well, its still not recommended...
<webbrandon> lol
<lazyPower> regardless of my fud
<lazyPower> sinzui: ty for correcting me. I'm thinking of chef-server not juju
<lazyPower> my internal wiring is getting crossed looking at this charm
<sinzui> webbrandon, juju's bootstrap node has been run on 512M, but I think 1G is the minimum. Each deployment may require its own sizes just because the services need more storage or ram
<sinzui> webbrandon, I often bootstrap with --constraints "mem=1G", then each deploy has --constraints "mem=2G"
<webbrandon> sinzui: thank you
<webbrandon> sinzui: I am thinking of launching a t1 because it's free, to manage a bitcoin pooling service
<webbrandon> unfortunately i have to run an m-class instance just to install the software
<sinzui> webbrandon, alas juju doesn't let you specify an aws/hp instance-type (yet). The constraints only guide the selection of an instance type. I think the HP rules have issues, in fact: 2G on aws gets me more memory than 2G on HP
<webbrandon> I understand how to launch the right instance type.
<rick_h__> lazyPower: should be coming back
<lazyPower> rick_h__: awesome. I'll run a test and see if its cleared up
<lazyPower> thank you for driving this rick_h__
<rick_h__> lazyPower: heh, last one around on a friday when it goes boom :)
<rick_h__> lucky me!
<lazyPower> We'll be meeting up in vegas soon enough. I'll buy the first beer :)
<mbruzek_> thanks rick_h__
<mbruzek_> now you can't say it is a thank-less job.
<rick_h__> mbruzek_: hah, good call
#juju 2014-02-22
<rick_h__> cjohnston: natefinch verified that 1.17.3 allows me to deploy. Had to update the juju/devel ppa to get it
<lazyPower> rick_h__: enjoy your weekend when it gets here. o/  thanks again for the help
<rick_h__> lazyPower: will do, you have fun as well
<cjohnston> thanks rick_h__
<designated> does anyone have a minute to help me with a juju issue?
<designated> I've installed maas and it seems to be working fine, have commissioned one node, but when i do a juju status --debug it hangs at "opening state; mongo addresses:"
<designated> and the node has moved from ready to allocated to user, even though I haven't installed any juju charms on it
<designated> anyone know how to resolve https://bugs.launchpad.net/juju-core/+bug/1178312
<_mup_> Bug #1178312: ERROR state: TLS handshake failed: x509: certificate signed by unknown authority <config> <cts-cloud-review> <ui> <juju-core:Triaged> <https://launchpad.net/bugs/1178312>
<cjohnston> hazmat: ping
<cjohnston> hazmat: when you come around, I just started seeing bug #1269519 again
<_mup_> Bug #1269519: Error on allwatcher api <juju-core:Fix Released by rogpeppe> <juju-deployer:Fix Released> <https://launchpad.net/bugs/1269519>
<hazmat> cjohnston, what of?
<hazmat> cjohnston, nutshell.. was in comment at end
<hazmat> cjohnston, namely juju 1.17+ fixes.. otherwise not resolved in deployer (fix would be series of short conns there instead of one conn per deploy).
<hazmat> but given core already resolved it, upgrades are the best fix.. cjohnston curious though what kind of deploy config is apt to trigger it.
<hazmat> cjohnston, if you can attach logs from machine 0 that would be helpful
<hazmat> cjohnston, and your deployer logs as well
<hazmat> i can do the necessary workaround in deployer.. haven't seen it though.. otoh.. i only really do manual provider atm
<hazmat> afaics rog's fix should resolve it
<hazmat> so curious to see the logs
<cjohnston> ack.. I'll see if I cant get it again
<cjohnston> hazmat: also.. deployer seems broken from the last release
<cjohnston> https://code.launchpad.net/~ev/ubuntu-ci-services-itself/pin-juju-deployer-revno/+merge/207805  has the error
<hazmat> cjohnston, hmm... traceback log?
<cjohnston> I'm not sure why ev went back to 88 instead of just 100 (IIRC)
<cjohnston> but we were using trunk and that started popping up
<hazmat> cjohnston, that's not trunk or last release
<hazmat> cjohnston, the 'using runtime' line now also says the name of the env
<cjohnston> hazmat: L27 is where he removed trunk to replace it with 88
<hazmat> cjohnston, which version of juju-core?
<hazmat> cjohnston, the last release has some fixes for juju 1.17+ .. ie. admin-secret auto gen and only in jenv as well as support for juju switch files instead of explicit JUJU_ENV
<cjohnston> hazmat: I don't remember if I got it when I was still on 1.17.x or after I downgraded to 1.16.6
<hazmat> or default in juju env
<hazmat> .yaml
<hazmat> cjohnston, so if you can reproduce with the latest deployer on 1.17.1+ i can take a look... but at the moment you're saying unknown version of juju and old version of deployer.
<cjohnston> hazmat: are you going to be around for a bit? if so, I can destroy what I have and do some deployments and debugging.. if not, we can look at it on Monday
<hazmat> cjohnston, i'll be around for an hr or 2.
<cjohnston> ok..
<cjohnston> destroying, then I'll check things out
<hazmat> cjohnston, feel free to ping whenever.. i keep an irc proxy up that generally works :-) if not i'll ping back on monday.
<cjohnston> I just don't want to destroy stuff and then find out you need something I missed :-)
<hazmat> cjohnston, mostly i just  need confirmation of issue with latest versions.. if its the watcher closed thing i'd like logs from machine 0 / state-server and the deployer log.
<cjohnston> ack
<cjohnston> bootstrapping now
<hazmat> cjohnston, specifically.. https://bugs.launchpad.net/juju-deployer/+bug/1269519 should not happen with juju-core 1.17.1+, and the other issue/merge you linked is resolved in the latest release (and afaics applies primarily to 1.17+ releases of juju-core (no admin-secret in environments.yaml))
<_mup_> Bug #1269519: Error on allwatcher api <juju-core:Fix Released by rogpeppe> <juju-deployer:Fix Released> <https://launchpad.net/bugs/1269519>
<cjohnston> hazmat: ok.. first bit of data
<cjohnston> http://paste.ubuntu.com/6978834/
<cjohnston> Package: juju-core
<cjohnston> New: yes
<cjohnston> State: installed
<cjohnston> Automatically installed: no
<cjohnston> Version: 1.17.3-0ubuntu1
<cjohnston> http://paste.ubuntu.com/6978835/
<cjohnston> What other info do you need for that one?
 * hazmat back
<hazmat> cjohnston, what version of jujuclient is that?
<cjohnston> 0.15-0ubuntu1
<hazmat> cjohnston, is your environment bootstrapped?
<hazmat> cjohnston,  what version of juju-core?
<cjohnston> hazmat: I pasted the juju-core version above.. 1.17.3-0ubuntu1
<hazmat> basically its saying you have no jenv file, and you have no admin-secret.
<hazmat> which afaics means at minimum you're not bootstrapped.
<cjohnston> http://paste.ubuntu.com/6978856/
<cjohnston> hazmat: it seems bootstrapped to me
<hazmat> cjohnston, can.. you run the following
<hazmat> cjohnston, http://pastebin.ubuntu.com/6978876/
<hazmat> cjohnston, you seem to be picking up a different version of deployer.. in rev 101, the log line is "Using runtime %s on %s"
<hazmat> you've just got "using runtime".. without the env name, which points to an older version
<hazmat> of deployer
<hazmat> ie you've got a current checkout in /tmp.. but depending on old installations, you might be picking up a different version
<hazmat> cjohnston, hmm
<hazmat> that is odd
<cjohnston> hazmat: what's odd tho is if I used the mp from ev which checks out revno 88, it worked
<hazmat> cjohnston, could you paste bin $ env | grep JUJU
<cjohnston> env | grep JUJU returns nothing
<hazmat> cjohnston, cool.. can you pastebin ls of ~/.juju
<cjohnston> environments  environments.yaml  ssh
<hazmat> cjohnston, so i'm going to commit revno 102
<hazmat> cjohnston, i see an issue in the traceback which that commit should resolve
<cjohnston> ok, cool
<hazmat> pushed
<cjohnston> testing
<hazmat> cjohnston, still in forensics mode.. the commit mostly just corrected the traceback
<hazmat> cjohnston, is lcy01 your default environment in environments.yaml?
<cjohnston> hazmat: http://paste.ubuntu.com/6978940/
<cjohnston> hazmat: it is
<hazmat> cjohnston, ok.. i'll make an egg-on-face release 0.3.4 which fixes it.. you can work around via JUJU_ENV="lcy01" or juju switch lcy01
<hazmat> cjohnston, pushed fix
<hazmat> revno 102
<hazmat> er. 104
<hazmat> er. 103
<cjohnston> lol
 * hazmat gives up and goes back to the bottle
<hazmat> cjohnston, that should resolve .. i'll wait around for confirmation
<cjohnston> not yet! I'm not done with you!
<cjohnston> lol
<hazmat> cjohnston, so no juju switch or JUJU_ENV needed with revno 103..
<cjohnston> hazmat: question for you... I haven't investigated much with this yet... when using core 1.17+ to deploy our stuff, I have an issue... we deploy the python-django charm.. for some reason, django stays on 1.3.1 (the precise version) which causes an error when running the postgres changed hook. If I downgrade to core 1.16.6, python-django is the correct version and works.. have you seen/heard of anything like
<cjohnston> that
<cjohnston> when deployed with 1.17+ if I juju ssh into the instance, and do apt-cache policy python-django I do see 1.5.x available, but 1.3.1 is what's installed
<hazmat> cjohnston, on precise.. juju needs the cloud archive tools pocket for mongodb.. on 1.16 it installed mongodb from the cloud archive.. on 1.17 it added the cloud archive but only installed mongodb explicitly from it.
<hazmat> and the cloud archive... happens to hold openstack.. of which horizon/dashboard uses django
<cjohnston> $%^& timeouts
<hazmat> ie. 1.17 tries to be explicit about just mongo from cloud archive.. 1.16 picked up django from there.
<cjohnston> gotcha
<cjohnston> is there a good workaround for that?
<hazmat> cjohnston, you mean you need django 1.5 ?
<cjohnston> yes
<hazmat> cjohnston, i call it virtualenv ... with offline src install from charm-cached src... but ignoring that for the moment.. a ppa for django?
<cjohnston> we can do that, I just wasn't sure if that's the best way
<hazmat> cjohnston, for py apps.. i'm a bit biased.. i tend to go venv
<hazmat> biased being a py dev
<cjohnston> we are wanting to not do venv's
<hazmat> cjohnston, re ppa.. requires testing to verify on conflicts
<hazmat> cjohnston, spotify did a nice debian packaging helper based on venv
<hazmat> https://github.com/spotify/dh-virtualenv
<hazmat> cjohnston, based on your src paths for deployer you're already well down the road to venv
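A sketch of that venv route in a charm's install hook; the paths and Django version are placeholders:

    #!/bin/bash
    # hooks/install (sketch)
    set -e
    apt-get install -y python-virtualenv
    virtualenv /srv/myapp/venv
    # offline-friendly: install from source cached in the charm
    /srv/myapp/venv/bin/pip install "$CHARM_DIR/files/Django-1.5.5.tar.gz"
    # or online, pinned:
    # /srv/myapp/venv/bin/pip install 'Django>=1.5,<1.6'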
<hazmat> trusty pkgs and juju stable ppa  pkgs coming
<hazmat> for 0.3.4
<cjohnston> sweet
<cjohnston> hazmat: is it possible to get those for saucy and precise as well?
<hazmat> cjohnston, hence the juju stable ppa
<cjohnston> ahh
<cjohnston> wow.. juju/stable has old stuff
<hazmat> cjohnston, so things working?
<cjohnston> :-)
<cjohnston> looks like it
<hazmat> cool
<cjohnston> I just got my pgsql-relation-joined error
 * hazmat uploads 0.3.4
<hazmat> to pypi
<cjohnston> thanks for your help hazmat
<hazmat> cjohnston, np. thanks for the feedback. have a good one.
<cjohnston> you too!
#juju 2014-02-23
<cjohnston> hazmat: doubting you're still around, but I just got bug #1269519 again
<_mup_> Bug #1269519: Error on allwatcher api <juju-core:Fix Released by rogpeppe> <juju-deployer:Fix Released> <https://launchpad.net/bugs/1269519>
<cjohnston> is there a way of knowing which machine log is needed
<hazmat> cjohnston, traceback from deployer run .... ideally with -vWd
<cjohnston> hazmat: http://paste.ubuntu.com/6979251/
<cjohnston> doesn't have -vWd tho
<hazmat> cjohnston, juju ssh 0
<hazmat> log is in /var/log/juju/machine-0.log afaicr
<cjohnston> sounds right
<cjohnston> hazmat: http://paste.ubuntu.com/6979254/ and all: http://paste.ubuntu.com/6979255/
<hazmat> cjohnston, interesting.. thanks. i'm done for the night but that's helpful.. i'll talk to rog re the errors there, but it sounds like, given the synchronous py api, short-term env connections are the way to go... will review and give feedback tomorrow
<hazmat> seems like the i/o timeout is the eventual err from masking the previous ping timeout.. tbd..
<cjohnston> thanks
<hazmat> cjohnston, hmm.. actually the log level on these pastebins is missing the api level.. if you have a chance to run it again, can you set JUJU_LOGGING_LEVEL="<root>=DEBUG"
<hazmat> or just on the environment via juju set logging-config="<root>=DEBUG"
<hazmat> cjohnston, what's the instance size on your state server in these envs
<cjohnston> hazmat: for juju set it wants a service name
<hazmat> cjohnston, juju set-env
<cjohnston> ack
<rick_h> ccccccbtujivtdenulrrvgtjjlcnnjrledtruujfgnhr
<cjohnston> agreed
<rick_h> :)
<cjohnston> hazmat: http://paste.ubuntu.com/6982593/ and http://paste.ubuntu.com/6982594/
<hazmat> cjohnston, you reproduced the state watcher gone away.. i think the logs might be too big for pastebin
<hazmat> cjohnston, er.. did you reproduce the state watcher gone away?
<cjohnston> I don't think so now that I look closer, but I do see:  WARNING discarding API open error: read tcp 127.0.0.1:37017: i/o timeout
<cjohnston> ERROR connection is shut down
<hazmat> cjohnston, what's the load on the state server machine like..
<cjohnston> not sure
<cjohnston> this 5 minute timeout sucks
<cjohnston> the highest load I've seen so far has been 2
<cjohnston> avg
<cjohnston> hazmat: I saw it hit almost 3
<Darkmantle> o/
<Darkmantle> Need some help please. I have Ubuntu desktop with Ubuntu Server 12.04 VM running MAAS - juju successfully bootstrapped but status hangs
<Darkmantle> Logs show that it can't connect to mongodb, i've double-checked the DNS configuration and it's ok
<Darkmantle> ?
<Darkmantle> 2014-02-23 19:35:03 INFO juju.state open.go:68 opening state; mongo addresses: ["localhost:37017"]; entity "" 2014-02-23 19:35:03 DEBUG juju.state open.go:88 connection failed, will retry: dial tcp 127.0.0.1:37017: connection refused
 * Darkmantle redeploys
<Darkmantle> Ok I can't do juju status, it just hangs, any idea why?
<Darkmantle> I've configured the DNS correctly - I get mongodb connection issues on juju node startup but they are all working ok, or so it seems
<Darkmantle> Or it could be a DNS issue, if I try to do mongo name.master:37017 I get no address associated
<Darkmantle> or via IP either
<Darkmantle> ?
<hazmat> Darkmantle, what version of juju?
<Darkmantle> 1.16.6
<Darkmantle> precise-amd64
<hazmat> Darkmantle, can you pastebin  $ juju status --debug
<Darkmantle> sure
<hazmat> Darkmantle, in the maas ui .. do you see the bootstrap/state server allocated?
<Darkmantle> Yes thats all done
<Darkmantle> I can SSH into it and do anything
<Darkmantle> Full network access, full connection to MAAS
<hazmat> k
<hazmat> Darkmantle, re pastebin.. apt-get install pastebinit .. handy cli pastebin client
<Darkmantle> ah thanks
<Darkmantle> Its a re-install so got nooo tools lol
<hazmat> Darkmantle, --upload-tools is a reasonable workaround
<Darkmantle> I did that
<Darkmantle> I did once with once without
<Darkmantle> http://pastebin.com/5i7rtX8U
<Darkmantle> the cloud-init-output log has the normal can't connect to 127.0.0.1:37017 error too
<Darkmantle> even though mongo can connect using mongo localhost:37017/juju
<Darkmantle> Or no wait sorry, it gets init call() failed error
<hazmat> Darkmantle, the mongo you pasted output for is not the same mongo that juju is running for itself..
<Darkmantle> Surely it can connect?
 * Darkmantle shrugs
<hazmat> Darkmantle,  its a different process.. /etc/init/mongodb.conf vs /etc/init/juju-db.conf
<Darkmantle> ah yeah
<hazmat> Darkmantle, could you pastebin the /var/log/cloud-init-output.log
<hazmat> that's fixed in the dev (>= 1.17) releases.. no extraneous mongo running.
<Darkmantle> http://paste.ubuntu.com/6984186/
<hazmat> fwiw
<hazmat> looks good
<Darkmantle> really?
<Darkmantle> i assume DNS should point to the MAAS node's IP, right? cause it does
<Darkmantle> and then MAAS points to .1 in the vlan
<hazmat> Darkmantle, can your client machine connect to the server? ie.. telnet 97qay.master 37017
<Darkmantle> Yes
<Darkmantle> I checked all the ports
<Darkmantle> ah no it can't by hostname hazmat , only by IP
<Darkmantle> definitely DNS then
<hazmat> sounds like
<Darkmantle> ugh
<hazmat> Darkmantle, you could add maas's dns server to your local resolv.conf maybe
<Darkmantle> i have yeah
<Darkmantle> i could update /etc/hosts since it's permanent
<Darkmantle> would be easier
<Darkmantle> and fixed
<hazmat> cool
<Darkmantle> quick question, just deployed juju-gui
<Darkmantle> what next? how do i get to it, etc?
<hazmat> Darkmantle, juju status juju-gui ... go to  https://$ip_address
<Darkmantle> ah duh
<Darkmantle> meh, errors
<Darkmantle> how long should machines be pending for? :L
<hazmat> Darkmantle, on maas.. its a bit more dependent on hardware
<Darkmantle> True. I gave the maas VM limited resources
<Darkmantle> I was told it can cope well on 1g 1CPU
<hazmat> Darkmantle, you can run juju debug-log -n 100
<Darkmantle> cannot run instances: gomaasapi: got error back from server: 409 CONFLICT
<hazmat> Darkmantle, or juju ssh 0  && less /var/log/juju/machine-0.log   which should have any provisioning bits
<hazmat> Darkmantle, are there other registered/available machines in maas?
<Darkmantle> ok there is only 1 node atm
<Darkmantle> with juju on it
<Darkmantle> im adding a juju-gui noow
<hazmat> Darkmantle, right.. 409 conflict.. means no additional nodes in maas to hand back to juju
<Darkmantle> yeah
<Darkmantle> i have to add a node first then deploy?
<hazmat> Darkmantle, so there's hulk-smash/manual placement mode.. when deploying..
<hazmat> Darkmantle, basically.. but re manual-placement.. you can place services onto existing machines with deploy --to=0
<hazmat> for example.. where 0 is placeholder for any machine id in the juju env
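Concretely, the placement forms look like this (machine ids are examples; see juju deploy --help):

    juju deploy mysql --to 0             # hulk-smash onto existing machine 0
    juju deploy juju-gui --to lxc:0      # new LXC container on machine 0
    juju add-unit mysql --to lxc:1       # extra unit in a container on machine 1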
<Darkmantle> Thats true
<Darkmantle> So I could add one i MAAS
<Darkmantle> then deploy it there?
<Darkmantle> I thought the point was to have machines for each service
<hazmat> Darkmantle,  machines or containers..
<Darkmantle> blah
<Darkmantle> AH
<Darkmantle> confused
<Darkmantle> Ok I get it now
<Darkmantle> I need to make a new node that I can deploy the services too?
<hazmat> thumper-afk, ie. you can also do juju deploy --to=lxc:0  and juju will create an lxc container on machine 0 for the service...  deploy --help for more info on that placement stuff
<hazmat> Darkmantle, yes.
<Darkmantle> Shouldn't it make them automatically?
<hazmat> Darkmantle, how? juju will request new machines from the provider.. but it can't buy new hardware for maas ;-)
<Darkmantle> so i have to add the new VMs manually
<Darkmantle> to host the juju services on
<Darkmantle> thats loooong :P
<Darkmantle> but ok
<hazmat> Darkmantle, ie. if you're on ec2.. it will request new instances for services.. on maas.. the machines have to be registered.. there's maas auto-enlist for racks and dcs, but if you're creating vms as maas machines, you'll have to make new ones to have them show up
<Darkmantle> fair enough
<Darkmantle> so make new VM, then add it to maas, then deploy
<Darkmantle> and it should automatically find it
<hazmat> yup
<Darkmantle> I need openstack or something to provision the VM's automatically
<Darkmantle> Totally should've gone that way
<hazmat> Darkmantle, or use local provider
<Darkmantle> Totally still will at some point, MAAS is ok but not as detailed
<Darkmantle> Or that
<Darkmantle> Make my juju have the 12gb RAM / 6 CPU's and do it that way
<hazmat> Darkmantle, maas on vms.. is really just a testing experience.. for charm dev / experiment.. i'd go with local provider.. or a cloud provider
<hazmat> cloud envs are generally pretty cheap for short lived envs
<hazmat> ie 10 machines for an hr ~ $1 usd.
<Darkmantle> mhm
<Darkmantle> hazmat i know its for testing
<Darkmantle> in fact im running my own test web environment
<Darkmantle> thats all
<Darkmantle> blah now PXE borked
<Darkmantle> there
 * Darkmantle yawns
#juju 2015-02-16
<lazyPower> trave: o/ did you figure out your vagrant issue?
<jose> marcoceppi, lazyPower: ping
<tvansteenburgh> jose: lots of US folks on holiday today, anything I can help with?
<jose> tvansteenburgh: there is a meeting scheduled for... now
<jose> I guess I am going to have to cancel that?
<tvansteenburgh> jose: dunno, guess you could wait a bit and see if anyone shows up
<nicopace> Hi guys... i've sent an email to the list, regarding amulet failing tests due to an ssh REMOTE HOST IDENTIFICATION CHANGED error. Any idea how i can solve this?
<nicopace> even though i destroy the environment many times per day, it keeps happening over and over again (in the local environment)
<jose> tvansteenburgh: I'll just cancel, it's an on-air meeting and we're already delayed
<jose> thanks for the heads up
<tvansteenburgh> nicopace: clear the offending entry from your known_hosts file
<tvansteenburgh> jose: np
<jose> nicopace: the error output gives a command to fix it
<jose> just run that and you'll be fine
<nicopace> tvansteenburgh, jose: sure, i've been there in the past (with ssh), but the problem is that it keeps repeating again and again because of the tests
<jose> nicopace: which provider?
<nicopace> i may remove the known_hosts file each time i run a test
<nicopace> local
<nicopace> jose: local
<jose> eh, I don't use local. won't be of much help.
<jose> that was happening with HPCloud a couple months ago
<jose> I filed a bug and now it's gone, but don't know how it'll work for Local
<nicopace> jose: there is an open bug now: https://bugs.launchpad.net/juju/+bug/802117
<mup> Bug #802117: juju ssh/scp commands cause spurious key errors <charmers> <ssh> <Amulet:Triaged> <pyjuju:Triaged> <juju-core:Triaged> <juju (Ubuntu):Triaged> <https://launchpad.net/bugs/802117>
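Until that bug is fixed, the usual workarounds look like this (addresses are examples; 10.0.3.x is the default LXC range used by the local provider):

    # drop the stale key for a recycled container address:
    ssh-keygen -R 10.0.3.25
    # or stop caching keys for the local-provider range entirely, in ~/.ssh/config:
    #   Host 10.0.3.*
    #       StrictHostKeyChecking no
    #       UserKnownHostsFile /dev/null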
<marcoceppi> jose: I just got the calendar invite
<nicopace> it is regarding juju
<nicopace> and as amulet uses juju well...
<marcoceppi> jose: we really suck at planning these meetings
<jose> marcoceppi: it's just 3 of us atm, we won't meet quorum
<jose> meh
<jose> happens
<marcoceppi> nicopace: yeah, I've escalated the bug to core team
<marcoceppi> nicopace: there is a workaround. On my phone atm but will reply tonight to your email about it
<nicopace> thanks marcoceppi, but i need to find an alternative fix, as it is blocking me cc:arosales
<marcoceppi> And will update the bug with the workaround
<nicopace> thanks marcoceppi, i'll appreciate that :)
 * marcoceppi bounces
<nicopace> marcoceppi: hope it is not happening in automated testing also
<stub> What is the current process to get a fresh automated testing run? My box got turned black at the end of last week, which makes me think there is still some manual process involved.
<jose> stub: in your own box, bundletester
<stub> jose: Nah, I want the bot for a second opinion
<tvansteenburgh> stub: you can click the box again
<tvansteenburgh> (if you are in ~charmers?)
<jose> I can fire them for you if you want
<stub> tvansteenburgh: yeah, did that. It changed from red to black end of last week
<jose> then it's on the wait queue
<jose> from there it's all automated (afaik)
<stub> jose: oh, ok. Didn't think it would be that long.
<jose> sometimes it takes long, I know :(
<tvansteenburgh> stub: i inadvertently dequeued a bunch, do it again
<tvansteenburgh> there is nothing queued right now
 * stub wrestles with his browser
<stub> tvansteenburgh: done. Black box spun around, remained black. Queued now?
<tvansteenburgh> stub: http://juju-ci.vapour.ws:8080/job/charm-bundle-test/11027/console
<tvansteenburgh> in progress
<stub> ta
<stub> tvansteenburgh: Is this running on a precise host by any chance? I think I'm getting python3.3 instead of python3.4
<tvansteenburgh> stub: no it's trusty
<stub> DEBUG:runner:tests/test_helpers.py:1275:28: F821 undefined name 'ProcessLookupError'
<stub> Unless I'm getting python2 somehow?
<tvansteenburgh> jenkins@charm-bundle-slave:~$ python3
<tvansteenburgh> Python 3.4.0 (default, Apr 11 2014, 13:05:11)
<tvansteenburgh> stub: check shebangs in tests i guess?
<stub> yeah, its getting a python2 interpreter. print(*args, **kw) is a syntax error.
<stub> Which I think means /usr/bin/nosetests is being invoked, rather than using the one in my venv. My makefile sets PATH, but that must be getting overridden. Or nosetests not installed in that venv at all.
<stub> yer, my Makefile runs virtualenv fine, but the following pip commands are not run so none of the pip dependencies get installed. Hmm...
<stub> oh, make -s so maybe they are?
<tvansteenburgh> stub: doh, yeah
<stub> So 'pip install flake8' won't install flake8 if the libraries are already there, and doesn't create the 'flake8' script in the venv's bin. So the system one runs, which is python2 and fails on python3 syntax.
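In other words, the fix is to call the venv's own scripts by path rather than trusting $PATH; a sketch (target names hypothetical, recipe lines must be tab-indented):

    VENV = .venv

    $(VENV):
        virtualenv -p python3 $(VENV)

    deps: $(VENV)
        $(VENV)/bin/pip install --ignore-installed flake8 nose

    lint: deps
        $(VENV)/bin/flake8 hooks tests

    test: deps
        $(VENV)/bin/nosetests tests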
<nicopace> lazyPower: are you arround?
<nicopace> i'm starting the zookeeper tests, and wanted to know if you had any input on that, as it seems you have been the last one pushing on that charm :)
<lazyPower> nicopace: o/
<lazyPower> which zookeeper charm, the hdp-zookeeper?
#juju 2015-02-17
<stub> tvansteenburgh, marcoceppi : I broke your Jenkins
<weblife> Thought someone here may appreciate this: https://mastersproject.info/blog/54de76fb03b198870a6daa12
<trave> anyone around here connected juju to digitalocean, using kapilt/juju-digitalocean? i feel that i am close, just having trouble with it timing out.
<trave> after setting up a free-tier account on AWS, things work like a charm. thanks anyway :]
<lazyPower> trave: i have successfully used/am using digital ocean w/ the plugin
<lazyPower> if you're still on that path. the AWS account provider is however one of the more stable/tried/tested providers in the toolbox
<trave> lazyPower, yea, i was likely pretty close with digitalocean, just wasn't picking up my ssh_key identity .pub name or something.. but this AWS path seems pretty stable. I take it the m1.small is about $18/mo.? Just to get my feet wet, how many charms can I throw at one of those instances? I want to toy with IO.js and Redis.
<lazyPower> trave: give or take - Digital Ocean is fairly straightforward once you get the flow - and i've found that often i have to manually add the machines myself since the provisioning of VMs takes longer than their highly touted 60 seconds
<lazyPower> juju add-machine ssh:root@host
<lazyPower> then its in the pool of available machines - however - to answer your aws question - 2 machines (1 for bootstrap, 1 for your apps/containers)
<trave> maybe thats all im missing there, is it doing a timeout when im boostrapping the first instance... now that ive spun up and destroyed a few attempts on AWS, i think i might go back to trying D.O. again
<lazyPower> i'll be around if you want help troubleshooting
<trave> cool, thanks man :]
<lazyPower> their provisioner gets wonky late at night
<lazyPower> maintenance windows and all that
<trave> yea, i might catch some zzzs soon, still got work in the a.m. but feel accomplished having waded my way this far, so far.
<lazyPower> http://blog.dasroot.net/juju-digital-ocean-awesome.html
<lazyPower> i did a fairly decent writeup of the flow to get moving w/ juju on DO
<lazyPower> but that also was submitted for the official docs - so not much to do here other than boost my analytics :)
<trave> Oh yes, I did watch that, which is what got me as far as I did. :D thanks for putting that together, maybe i'll absorb more the second time watching it
<lazyPower> trave: allright - i'll be off on another work station so ping me if you need me for anything.
<trave> lazyPower: probably a dumb question, but here's where i get stuck when spinning up a new digitalocean-0 bootstrap instance... it asks for the root password, even though i've specified an ENV_VAR for DO_SSH_KEY (do I need to add .pub to the end of that value?). Since I don't yet know what that instance's generated root password is, do I let it hang there for a bit while I use the DO UI to go reset it to something else, which emails it to me?
<lazyPower> ah, no - you probably need to add your SSH key to DigitalOcean's admin web ui so it loads that into the instance(s)
<trave> Its listed in there, strange.
<lazyPower> http://i.imgur.com/dBakNhu.png
<lazyPower> what version of the plugin do you have installed with pip? 0.5.1 assuming?
<trave> yea, ive got it in there, and ive used it successfully before scp'ing it to my other instance by hand before.
<trave> checking..
<trave> juju-docean==0.5.1
<lazyPower> juju version 1.21.1?
<trave> 1.18.4-unknown-amd64
<lazyPower> whoa, thats crusty
<trave> ;]
<lazyPower> sudo add-apt-repository ppa:juju/stable
<trave> im on osx
<lazyPower> ah
<lazyPower> we're shipping 1.18 on mac still? wat
 * lazyPower makes note to follow up on that later today
<lazyPower> it should still work regardless - the manual provider code hasn't changed.
<lazyPower> let me check the bug tracker on hazmat's repo real fast, 1 sec
<trave> i'll try the --devel flag
<lazyPower> ehh
<lazyPower> lets stick with 1.18 for the time being until i can verify
<trave> k :]
<lazyPower> ah
<lazyPower> is the key you're using to authenticate against docean ~/.ssh/id_rsa?
<trave> i named it something else
<trave> i can make an id_rsa
<lazyPower> i bet thats why its having a derp moment
<lazyPower> https://github.com/kapilt/juju-digitalocean/issues/29
<trave> aha, you da man.
<trave> k, making new key, trying again.
<trave> well, it did something different at least: juju_docean.exceptions.ProviderError: Failed to get running instance digitalocean-0 event: {u'status': u'OK', u'event': {u'droplet_id': 4177198, u'percentage': u'90', u'event_type_id': 1, u'id': 43954384, u'action_status': None}}
<trave> ERROR exit status 1
<lazyPower> it timed out - thats in relation to this: https://github.com/kapilt/juju-digitalocean/issues/27
<lazyPower> it gets pokey now and again unfortunately
<trave> k, trying agin :]
<lazyPower> hit paydirt yet trave?
<trave> no, just what keeps seeming like timeouts
<trave> you think the --devel build is in a unstable state?
<lazyPower> i dont know
<lazyPower> its worth a go
<lazyPower> are you tracking the vm launch in a browser while you bootstrap?
<trave> or, is 512MB memory/20GB disk the smallest tier that DO does? would spinning up a smaller instance be faster, to get under that 3min window?
<trave> yea, i can see it spinning up a new digitalocean-0 instance, the CLI throws the error before it's done provisioning that box
<trave> my next feat will be to try out the --devel build, i bet it will work like a charm
<lazyPower> eh
<trave> ;]
<lazyPower> its not a juju problem :( its a DO problem atm
<trave> yea
<lazyPower> the only thing i can suggest is either keep trying until the API gets un-pokey, or do a pure manual bootstrap
<lazyPower> which is slightly more inconvenient
<lazyPower> but it doesn't have any timing issues, you spin up the machine, specify the bootstrap host in your environments.yaml and you're off and running
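The manual-provider stanza for that looks roughly like this in a 1.18-era environments.yaml (the IP is a placeholder for the droplet's public address):

    environments:
      do-manual:
        type: manual
        bootstrap-host: 203.0.113.10
        bootstrap-user: root

    # then:
    #   juju bootstrap -e do-manual
    #   juju add-machine ssh:root@<next-droplet-ip>   # grow the machine pool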
<trave> gotcha. yea, i'll try through the day tomorrow, maybe their performance is up during the peak hours
<lazyPower> should be better then yeah.
<lazyPower> its that or try a different DC
<trave> yep, several more avenues to try, this is good for me to broaden my skillset and learn
<trave> thank you so much for the help, youve been great
<lazyPower> I'm sorry it hasn't been as smooth as cheddar tonight however
<trave> i'll be around :]
<lazyPower> if only everything were as reliable :)
<trave> :D
<trave> lazyPower: hey, a simple: brew uninstall juju; brew install juju; now gives me version: 1.21.1-yosemite-amd64
<lazyPower> \o/
<trave> trying yet again ;]
<trave> woot! bootstrap complete.
<trave> is there any reason the examples show 2g memory for the bootstrap machine? this time I just used a 512m box, that should be fine, right?
<lazyPower> yeah thats fine
<lazyPower> the example was with constraints - and thats why you see the higher mem limit
<trave> again, thanks. :]
<lazyPower> hazmat: thanks for the reply, thats basically what i had figured. Juju isn't doing anything special outside of using the agent and falling back to that specified key
<hazmat> lazyPower: its very badly documented in juju
<hazmat>  search authorized-keys over here https://juju.ubuntu.com/docs/config-general.html
<hazmat> and afaics thats the only doc for it.
<lazyPower> https://github.com/juju/docs/issues/268
<lazyPower> left a TODO to follow up on. cheers
<hazmat> lazyPower: thanks
<hazmat> lazyPower: for the other issue i have a timeout option sitting in my working copy...
 * lazyPower fist pumps
<lazyPower> so we can see that branch land soon, very nice
<lazyPower> i was considering pulling the repo this weekend and working on the v2api support if you haven't already looked into it and getting something shot over for review
<lazyPower> i got sidetracked with my tinfoil hat, i must admit
<lazyPower> hazmat: it may be prudent to close #29 as thats a juju issue and not a plugin issue.
<lazyPower> unless you plan on supporting upload ssh key to DO and setting config to leverage that key
<tvansteenburgh> stub: i killed that jenkins job
<stub> tvansteenburgh: Ta.
<stub> ERROR failed to bootstrap environment: cannot start bootstrap instance: no OS images found for location "West US", series "trusty", architectures ["amd64" "arm64" "armhf" "i386" "ppc64el"] (and endpoint: "https://management.core.windows.net/")
<stub> I think the next one is doomed too
<stub> Still, am interested in how things go with the other providers. I had some code paths not running under lxc that failed with the other providers.
<tvansteenburgh> stub you want me to kill this one too?
<stub> tvansteenburgh: Only if you think a retry will fix that error. 'no OS images found for location' doesn't fill me with confidence of it working today.
<tvansteenburgh> stub: i'll let it run i guess. azure is the environment most often broken. it may work later though
<stub> tvansteenburgh: How many environments are there btw? About 10?
<tvansteenburgh> 5
<tvansteenburgh> lxc, hp, aws, azure, joyent
<stub> I might have told bundletester to run the unittests twice then.
<tvansteenburgh> stub: well let me know if you need stuff killed or whatever, happy to help
<stub> ta. This run looks happy for the next hour in any case.
<tvansteenburgh> stub: also next time you want to start one, let me know, i want to try it on the new jenkins setup i've been working on
<stub> ok
<hazmat> lazyPower: definitely not uploading people's public keys.. too much responsibility.. i'll close it out.
<jcastro> jose, what day do you get to SCALE?
<jose> jcastro: tomorrow 3pm local time
<jose> I'm getting ready to fly out later today
<jcastro> oh excellent, me and marcoceppi as well, we can just sit down and bash out the details of our talk
<jcastro> which isn't until saturday anyway
<jose> yeah, sounds good to me
<lazyPower> marcoceppi: mbruzek, tvansteenburgh - whats our stance on incoming tests that only stand up the service? Enough to satisfy the requirement or do we gently guide them to write an actual functional test?  https://code.launchpad.net/~jose/charms/precise/quassel-core/add-tests/+merge/246001 <- is the test in question
<lazyPower> I'm inclined to say the latter, but its not just my opinion here.
<mbruzek> gently guide
<mbruzek> lazyPower: even if he did a ps aux | grep quassel that would make me happy
<lazyPower> yeah, we have enough weak tests i feel
<lazyPower> +1
<jose> +1 to that, nice tests are easy enough to make
<jose> (I made those in a hurry since Paul wanted the charm approved and had no tests, no results yet)
<mbruzek> ack
<lazyPower> I promise this wasn't a poke @ your effort(s) jose :)
<jose> question, though
<jose> should 'simple' tests be renamed to 10-deploy, or should I keep them as 99-autogen?
<lazyPower> if you're not doing anything to actually validate, the 99-autogen gives me a clear indicator of what's been done
<jose> ok
<marcoceppi> jose: yeah, leave them as 99-autogen, we'll be sniffing that out in the future to find charms that need better tests and it makes it so we can track unmaintained charms
<jose> ok
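A sketch of the kind of functional test being asked for, as an executable tests/10-deploy script; the quasselcore process name and the poll timings are assumptions, not taken from the merge proposal:

    #!/bin/bash
    # tests/10-deploy: stand the service up, then prove the daemon is running
    set -e
    juju deploy quassel-core
    # crude poll for the unit to come up (amulet does this more robustly)
    for i in $(seq 1 60); do
        juju status quassel-core | grep -q 'agent-state: started' && break
        sleep 10
    done
    # the check mbruzek asked for: is the process actually there?
    juju run --unit quassel-core/0 'ps aux | grep -v grep | grep quasselcore'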
<stub> tvansteenburgh: Do you recall what constraints the test vms have? I can install the charm on our local OpenStack, but with little RAM the install hook fails due to the OOM killer.
<stub> tvansteenburgh: I don't think I'm going to get anything more useful out of this build, so you can kill it. I might have hung it again too sorry.
<stub> tvansteenburgh: Happy to have it kicked off in your new Jenkins if you want to give it a workout.
<tvansteenburgh> stub: will do
<tvansteenburgh> stub: in the new setup we bootstrap with mem=2G by default
<stub> I'll test that here. I know it fails with 1G, and installs with 8G
<stub> I may be able to tune the requirements down at my end if it is needed.
<tvansteenburgh> stub: http://juju-ci.vapour.ws:8080/computer/charm-bundle-slave/
<stokachu> jcastro: lazyPower do you guys know if juju http proxy settings are exposed in charms?
<stokachu> for things like wget to make use of
<lazyPower> I'm not sure tbh
<jcastro> yeah, that's a good question
<tvansteenburgh> stub: new tests running, the main job kicks off one job per substrate
<stub> tvansteenburgh: Oh, cool. No more interleaving of output.
<lazyPower> tvansteenburgh: is my docker test still in the queue or did it get wiped?
<stub> lazyPower: Its running now
 * lazyPower does a dance
<lazyPower> ta
<stub> lazyPower: oh, it *was* running
<tvansteenburgh> stub: yeah, much better for several reasons. will be the default soonish
<lazyPower> stokachu: i'm reasonably certain that if its set as an env variable it will be, however it would be best to confirm with someone like wwitzel3
<tvansteenburgh> lazyPower: pfft i thought that was stub's job and killed it
<lazyPower> tvansteenburgh: >.>
<jcastro> http://askubuntu.com/questions/430865/juju-http-proxy-and-no-proxy-settings
<tvansteenburgh> lazyPower:  what was the test url
<jcastro> stokachu, ^^^
<jcastro> "The proxy options are exported in all hook execution contexts, and also available in the shell through "juju ssh" or "juju run"."
<lazyPower> tvansteenburgh: i didnt get one, i kicked it off from the revq
<tvansteenburgh> lazyPower: if it's any consolation, it had just started
<stokachu> jcastro: ah nice find!
<tvansteenburgh> lazyPower: no i mean the url of the thing being tested
<lazyPower> https://bugs.launchpad.net/charms/+bug/1413775
<mup> Bug #1413775: New Charm - Docker <Juju Charms Collection:New> <https://launchpad.net/bugs/1413775>
<tvansteenburgh> lazyPower: is this it? lp:~lazypower/charms/trusty/docker/trunk
<lazyPower> yep
<stub> manual deploy to openstack vm with 2G seems ok, no oom
<tvansteenburgh> lazyPower: queued up behind stub's jobs in the new queue
<lazyPower> ta
<stub> @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
<stub> tvansteenburgh: You may need to have the authorized_keys scrubbed between jobs, like http://paste.ubuntu.com/10276641/
<stub> or just nuke both of them entirely I guess
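That warning comes from stale known_hosts entries (rather than authorized_keys), so the per-job scrub would be something like:

    ssh-keygen -R 10.0.3.42        # drop the stale key for one recycled address
    rm -f ~/.ssh/known_hosts       # or nuke the lot, if the slave only talks to ephemeral hosts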
<stub> oh, here it goes...
 * stub watches his job on hp
<tvansteenburgh> stub: yeah good point
<stub> DEBUG:runner:2015-02-17 17:37:21 Error getting env api endpoints, env bootstrapped?
<stub> The lxc job bombed
 * tvansteenburgh looks
<stub> grr... my install hook failed on joyent, but I can't reproduce locally or on our openstack :-/
<tvansteenburgh> stub: no idea what happened on lxc, it's running again here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/8/console
<stub> ta
<tvansteenburgh> stub: we're capturing all-machines.log at the end of the tests now, that might yield a clue re joyent
<stub> tvansteenburgh: It should, ya.
<stub> I think Cassandra is crashing and we are waiting for the install hook to timeout and fail, rather than slow bootstrapping.
<sh00k> I have what seems like a pretty dumb question that I'm hoping someone in here might be able to help me with. I already have an environment configured and a machine bootstrapped in EC2. I am now on a different client machine and want to connect to the existing environment/bootstrap. I've already tried migrating the entire 'juju' folder to the other client machine, but I am getting an error back when trying to run 'juju status' of: WARNING discar
<sh00k> 70/environment/d4c68949-2e62-4604-8aa2-xxxxxxxxxxxxx/api"
<sh00k> ERROR Unable to connect to environment "aws". Please check your credentials or use 'juju bootstrap' to create a new environment. Error details: no instances found" any thoughts?
<lazyPower> sh00k: did your $JUJU_HOME/environments/aws.jenv make it during the copy?
<lazyPower> if it did, are you able to ping the ip of the state server?
<sh00k> Yup, i double checked that - as well as the file permissions
<sh00k> stand by let me try that
<sh00k> Damn, so i'm guessing that's it. Ping requests timed out :/
<lazyPower> that would be a key cause, if your network reachability isn't there.
<lazyPower> its bit me a couple times too - dont feel like the lone stranger sh00k :)
<sh00k> hahah thanks a lot
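For anyone hitting the same wall, the moving parts in juju 1.x are roughly these; the address is illustrative:

    # on the original client: the connection info lives in the per-environment .jenv
    scp ~/.juju/environments/aws.jenv newclient:~/.juju/environments/
    # on the new client: confirm the state server is reachable before blaming the file
    ping 54.12.34.56               # the bootstrap instance must be up and routable
    nc -zv 54.12.34.56 17070       # the juju API port
    juju status -e aws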
<marcoceppi> tvansteenburgh: what do I need to change in the review queue for the new testing stuff?
<tvansteenburgh> marcoceppi: new jenkins endpoint, and you'll get one callback per substrate now
<marcoceppi> Docs for this?
<tvansteenburgh> marcoceppi: HAHAHAHAH
<marcoceppi> ROFL
<tvansteenburgh> marcoceppi: i figured we could just bang it out at the sprint
<marcoceppi> tvansteenburgh: I plan to make a bunch of changes this week to revq
<marcoceppi> That works too
<tvansteenburgh> marcoceppi: oh ok, i'll type something up
<tvansteenburgh> sooner would be better i guess
<tvansteenburgh> i'll document it
<marcoceppi> Either way is fine. Does the old style work?
<tvansteenburgh> the old one still works for now
<marcoceppi> Rough docs are fine. Even a bunch of cmd outputs
<tvansteenburgh> i haven't changed it, just built the new thing on the side
<tvansteenburgh> ack re docs
<marcoceppi> Cool
<tvansteenburgh> lunch time
<lazyPower> marcoceppi: the endpoint/json stuff you sent me will largely remain the same yes?
<marcoceppi> lazyPower: yes, if it changes I'll let you know
<marcoceppi> Will also be exposing more data
<lazyPower> nice
<lamont> will juju destroy-env remove volumes that may have been attached to instances?
<lamont> or simply detach?
<lazyPower> simply detach
<lazyPower> lamont: you're referring to extraneous volumes, such as those that are provided by block-storage-broker correct?
<lazyPower> or additional block devices allocated to the node, not just the baseline disk that comes with the VM?
<lamont> lazyPower: correct
<lamont> specifically one that our scripts attached with euca-attach-volume
<lazyPower> yeah, those are not deleted on machine termination
 * lamont does his destroy-env then
<lazyPower> lamont: take a snapshot to be safe :)
<lamont> now you tell me.
<lamont> volume still there, marked available
<lamont> woo typing!
 * lazyPower grins
<lazyPower> nothing like a dose of doubt right after you send a command no?
<lazyPower> sorry about that - just happened to think its always better to err on the side of caution
<lamont> yep
<lamont> you were about 8 seconds after I said 'y'
<lazyPower> i wont steer you wrong... on purpose.
<lamont> hehe
<lamont> alls well that ends well
<lazyPower> however with storage landing in core soon - you might want to be a bit more persnickety about snapshotting before destroying things.
<lazyPower> i'm not 100% sure how it will work, i'd need to fish up some planning docs
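The snapshot-first routine lazyPower is recommending, in the euca2ools terms lamont was already using; the volume id and environment name are placeholders:

    euca-describe-volumes                 # find the volume id and its state
    euca-create-snapshot vol-1a2b3c4d     # snapshot before anything destructive
    juju destroy-environment myenv        # detaches, but does not delete, the volume
    euca-describe-volumes                 # the volume should now show as "available"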
<lazyPower> tvansteenburgh: did you kick off that test manually?
<lazyPower> i ask because the bug hasn't been updated, and i assume that is a byproduct of how the test was run
<tvansteenburgh> lazyPower: correct
<tvansteenburgh> lazyPower: i expected to see the results here though, not sure why they're not http://reports.vapour.ws/charm-summary/docker
<tvansteenburgh> jog might know
 * jog looks
<travnewmatic> hello all, i know there's a separate maas channel but i'm not getting any response on there
<travnewmatic> can anyone help me with enlisting a node?
<travnewmatic> the node boots from pxe, and ends up at an ubuntu login prompt, but it doesn't appear as a node in my maas control panel on either the region controller or cluster controller
<travnewmatic> i can ping the pxe'd node
<tvansteenburgh> lazyPower: http://reports.vapour.ws/charm-summary/docker
<marcoceppi> tvansteenburgh: that's awesome dude
<tvansteenburgh> marcoceppi: what's awesome?
<marcoceppi> tvansteenburgh: that URL
<tvansteenburgh> ah, yeah, thank jog :)
<marcoceppi> jog: sweet url
<marcoceppi> awesomeee http://reports.vapour.ws/charm-summary/mysql
<tvansteenburgh> marcoceppi: yeah, mysql has a long history of passing proof and lint!
<marcoceppi> \o/
<tvansteenburgh> lol
<mwak> o/
#juju 2015-02-18
<jam> fwereade_: /wave
<stub> tvansteenburgh: The juju log was invaluable for seeing why my charm was failing - charmhelpers bug triggered by fstab formatting.
<lazyPower> trending!
<tvansteenburgh> stub: excellent!
<lazyPower> mbruzek: lmk when you're on shift, i'd like to go over our strategy for the kubes tests and get cracking on r2g
<mbruzek> lazyPower: Yeah I was just taking care of some administrative stuff.  Ready when you are.
<lazyPower> i'm on the voip channel when you're ready
<whit> tvansteenburgh, does bundletester/deployer make shallow checkouts from git?
<tvansteenburgh> no
<tvansteenburgh> whit: actually, i think deployer does
<tvansteenburgh> whit: bundletester doesn't but it would be a nice add
<whit> tvansteenburgh, ah ok.  our tests are running out of space for kubes and we suspect it's the vendored binaries
<whit> tvansteenburgh, a shallow copy might fix that...
<tvansteenburgh> whit: https://github.com/juju-solutions/bundletester/issues/9
<whit> tvansteenburgh, ah cool.  I'll reference that when I make my PR
<tvansteenburgh> whit: running out of space? where?
<whit> tvansteenburgh, on jenkins runs?
<tvansteenburgh> whit: nah, see #eco
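The shallow-checkout idea is just git's --depth flag; the repo URL below is illustrative, and issue 9 above tracks teaching bundletester to do this itself:

    git clone https://github.com/example/some-charm.git             # full history, every old revision of vendored binaries
    git clone --depth 1 https://github.com/example/some-charm.git   # tip commit only, much smaller on disk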
<skay> feature request. juju command that makes an ascii art version of the layout of what shows on juju gui :)
<skay> I'm in the middle of writing documentation and wanted a picture of my service layout and made an ascii diagram as a place holder
<skay> though it might get permanent if I can get away with it
<rick_h_> skay: we're working on https://github.com/juju/jujusvg to generate those but right now it needs you to have the layout annotations to work correctly
<skay> rick_h_: oh neat
<skay> that will be very cool
<rick_h_> skay: but the visuals on http://jujucharms.com/mongodb-cluster are generated from that and hopefully we'll get it so it'll auto layout and such so folks can just generate 'gui-like' images from any bundle
<hnix> HI! please, if anyone can give some help on how to deploy a Rails App with juju : i still suffer from a bug : https://bugs.launchpad.net/charms/+source/rails/+bug/1242669
<mup> Bug #1242669: README instructions for running rake tasks lead to error <papercut> <readme> <rails (Juju Charms Collection):Confirmed for lazypower> <https://launchpad.net/bugs/1242669>
<hnix> i would appreciate it if anyone who has successfully deployed a simple Rails App on an EC2 instance could share simplified steps !
#juju 2015-02-19
<lazyPower> hnix: o/
<lazyPower> there's some interesting behavior at play here with the rails charm. In its current incarnation - rvm is being used as a "wrapped command" - meaning it only exists within the context of the charm hooks. This has some implications with the current documentation, and usage patterns
<lazyPower> hnix: what i suggest is attempting to do the following: "juju run rake db:migrate" instead of juju ssh - if that fails, there is a rewrite in progress of moving the rails charm from rvm to rbenv
<lazyPower> and i dont have an ETA on when I will be finished with that, however - all contributions/testing are welcome :)
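Spelled out, that suggestion would be run against a unit, along these lines (the rails/0 unit name is assumed):

    # executes in a hook-style context, where the charm's ruby wrappers exist:
    juju run --unit rails/0 "rake db:migrate"
    # unlike 'juju ssh rails/0', whose plain login shell won't see the wrapped commands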
<hnix> LazyPower : Thank u !
<lazyPower> hnix: if you're interested in following along with that migration effort i have a github branch you can watch - https://github.com/chuckbutler/rails-charm/tree/rbenv_migration
<hnix> i'm a new GoLang programmer, about 2 weeks in. i really want to contribute, to extend my programming skills .... this is why i'm playing with this project ( #juju ), so i want to learn the basics as soon as possible and then contribute actively to a project like this or docker
<lazyPower> We welcome all the help we can get hnix. Are you looking to learn how juju works then contribute to juju-core?
<hnix> lazypower : i will check this now and see ! i appreciate any advice
<hnix> lazypower : Yeah i want to contribute to juju-core
<jw4> hnix: welcome - have you checked out CONTRIBUTING.md in the juju-core repo?
<thumper> hnix: o/
<thumper> hnix: what sort of contributions are you interested in offering?
<thumper> hnix: what floats your boat?
<hnix> lazypower : yeah i checked CONTRIBUTING.md + i cloned the git repo ... any task that i can do, i will do ... for now i don't know what, but along my learning and Juju discovery i will figure out what type of contribution i can make
<lazyPower> hnix: jw4 and thumper are both core contributors. They're helpful in finding bitesized bugs to fix :)
<lazyPower> we have a fairly large landscape in terms of contributions - juju-core is great if you want to sharpen your go, or if you're looking for general contributions we have a large collection (260+) of charms (services) that are always open to contributions of any kind, from docs to actual orchestration logic
<lazyPower> hnix: if your main focus is to contribute to core, reading the Contributing doc and joining #juju-dev would be my recommendation. Learn the lay of the land - find your first bitesized bug and dive in.
<jw4> hnix: bit of a firehose, but this is the list of high priority bugs in juju-core https://bugs.launchpad.net/juju-core/+bugs?search=Search&field.importance=High&field.status=New&field.status=Incomplete&field.status=Confirmed&field.status=Triaged&field.status=In+Progress&field.status=Fix+Committed
<hnix> Whoaa! take it easy man ... i'm a new Golang dev. you're scaring me  +_+
<jw4> hnix: once you're able to build and run all the tests you should be ready to start tackling bugs ;)
<jw4> hnix: lol - I think lazyPower may know of some charms that are written in Go (right lazyPower ?) those might be bite sized too
<lazyPower> we dont have charms in go atm
<lazyPower> but it would be an interesting take on charming
<hnix> is this a fast track to dive into a new lang.
<jw4> lazyPower, hnix strike that then
<sarnold> ISTR someone gave that a shot a few months back..
<lazyPower> natefinch i think
<lazyPower> or at least he was talking about it
<lazyPower> my problem with a golang charm is how do you debug a binary when things go sideways?
<jw4> hnix: there is some good go code in juju-core, but it is fairly sophisticated - depends on how big of a bite you want to take I guess
<sarnold> lazyPower: yeah, I wouldn't want to maintain it myself :)
<jw4> lazyPower: +1
<lazyPower> jw4: would be good if we had some small things like localizations, or output corrections.
<lazyPower> tag those as bitesized - as we have a page that links to a zero-output bug tracker list
<lazyPower> https://juju.ubuntu.com/resources/easy-tasks-for-new-developers/ <-- the juju-core list is empty
<jw4> lazyPower: yeah - that's a great idea.   Hmm...
<sarnold> this one feels like it's probably approachable by a new dev https://bugs.launchpad.net/juju-core/+bug/1420057
<mup> Bug #1420057: agents see "too many open files" errors after many failed API attempts <juju-core:Triaged> <https://launchpad.net/bugs/1420057>
<hnix> Lazypower : so trying out Charms is easier to begin with than juju-core ?
<sarnold> strace agents to find out which operations open but don't close file descriptors...
<hnix> is it the case ( advice )?
<lazyPower> hnix: charms aren't go-lang specific, but they are a great place to start contributing
<lazyPower> sarnold: you love pain
<sarnold> lazyPower: I figured it wouldn't take much 'internal' knowledge of juju to get started on it
<sarnold> lazyPower: .. and perhaps the agents are easier than the orchestrator
<lazyPower> thats a fair assumption
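One plausible shape for the descriptor hunt sarnold suggests, run on an affected machine; jujud as the target process is an assumption, and this is a sketch rather than a recipe:

    pid=$(pgrep -f jujud | head -1)     # the machine agent
    ls /proc/$pid/fd | wc -l            # how many descriptors are open right now
    sudo lsof -p $pid | tail -20        # what the newest ones point at
    # watch descriptor churn across failed API attempts:
    sudo strace -f -e trace=open,close,socket,connect -p $pid -o /tmp/agent-fds.log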
<hnix> lazypower : hhhhhh we love fast bootstrap
<lazyPower> https://bugs.launchpad.net/juju-core/+bug/1273216
<mup> Bug #1273216: unknown --series to add-machine breaks provisioner <juju-core:Triaged> <https://launchpad.net/bugs/1273216>
<lazyPower> that looks like it would be a good one to add to as well - as you can define a list of series we support, and do a dict lookup
<sarnold> oh, that one looks easier
<sarnold> just on a blind guess, anyway :)
<lazyPower> https://bugs.launchpad.net/juju-core/+bugs?field.tag=bitesized
<lazyPower> there's some
<lazyPower> https://bugs.launchpad.net/juju-core/+bug/1320218 - probably being the easiest out of all that i found... yaml lint + error message
<mup> Bug #1320218: can't read environment file no hint of actual yaml error <bitesize> <config> <ui> <juju-core:Triaged> <https://launchpad.net/bugs/1320218>
<hnix> lazypower : i think i followed https://github.com/chuckbutler/rails-charm/tree/rbenv_migration : 1 week ago, and the steps did not work for me . i think it was a bad start .
<sarnold> maybe... I can imagine getting a few of those cases would be easy but 'solving' that in full generality might take some work
<lazyPower> hnix: i'm not sure i parsed that - you tried the branch and it failed yes? thats known - its still WIP as there's quite a bit to be done to fully triage a migration from rvm => rbenv
<lazyPower> plus we're going from a localized ruby installation to an rbenv installation that scopes to users, so you're not installing system wide
<lazyPower> the idea is to move to a 'bullet proof deployment' model where binstubs are used, and localized ruby copies can be embedded in the application itself
<lazyPower> i see no reason to 'hide ruby' on the system which is what the current charm does, it creates rvm wrappers - which are nearly impossible to consume outside of the charm hook context.
<hnix> the app that i'm trying to deploy is https://github.com/ziralabs/E-learing-app . it's a learning app created to get my master's degree in education ... so after that i'll launch it publicly for anyone who wants to know how to integrate a REPL built with NodeJS into a Rails APP
<stokachu> tvansteenburgh: do you know if python3 version of jujuclient is making its way into the archive soon?
<lazyPower> stokachu: hazmat's latest release of jujuclient is py2 and py3 compliant
<lazyPower> stokachu: https://github.com/kapilt/python-jujuclient/blob/master/jujuclient.py#L35
<lazyPower> oh you asked if it was getting into the archive - sorry jumped the gun there.
<toyo|work> I cant seem to get juju to bootstrap on version 1.20.11
<toyo|work> Juju cannot bootstrap because no tools are available for your environment.
<gnuoy> jamespage, would you mind taking a look at https://code.launchpad.net/~gnuoy/charm-helpers/1422386/+merge/249837 if/when you have a moment ?
<jamespage> gnuoy, to my unaware brain it looks OK
<stub> gnuoy: Did you fix https://bugs.launchpad.net/charm-helpers/+bug/1423176 with your nrpe landing?
<mup> Bug #1423176: test_nrpe tests broken on trunk <Charm Helpers:New> <https://launchpad.net/bugs/1423176>
<gnuoy> stub, yes, it's fixed, and I'm hanging my head in shame for not having spotted the unit test break
<Murali> hi jamespage I deployed using lxc like juju deploy --to lxc:0 openstack-dashboard
<Murali> my juju status is pending for a long time
<Murali> am I missing something here
<jamespage> Murali, the first container can take a bit of time on each physical host depending on how fast your internet connection is
<jamespage> it has to pull down the right cloud image and unpack it for use locally
<Murali> ok
<Murali> Hi jamespage
<Murali> even when i deploy keystone and dashboard on different nodes, the login prompts an error saying "authentication failed for admin. please try again ".. been stuck on this for some time
<Murali> Are there any logs which could give me some hints.. No errors found in the juju debug logs
<jamespage> Murali, so I'd check the keystone log files (/var/log/keystone) and anything in /var/log/apache (on the dashboard unit)
<Murali> On apache, we get a log saying "login failed for user".. and in the keystone logs we couldn't find anything
<jamespage> Murali, OK - so lets bypass the dashboard and make sure that you can actually access keystone from the cli
<jamespage> Murali, http://paste.ubuntu.com/10306078/
<jamespage> Murali, please adapt that for your environment - specifically the password
<Murali> ok
<jamespage> Murali, you can then source that file and try a 'keystone catalog' from a command line please
<jamespage> Murali, trying to see if you have a keystone problem or a dashboard problem
<Murali> Ok
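The paste has expired, but an adapted credentials file along those lines would look like this; every value is a placeholder, and the password must match whatever was set on the keystone charm:

    # novarc
    export OS_USERNAME=admin
    export OS_PASSWORD=openstack        # e.g. juju set keystone admin-password=openstack
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://10.0.0.10:5000/v2.0

    source ./novarc
    keystone catalog    # a populated catalog means keystone itself is answering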
<jamespage> Murali, fyi I have a few errands to run in about 15 mins which means you won't get responses from me for about 1.5-2hrs
<jamespage> (not ignoring you :-))
<Murali> thanks for your help.. we will try out the steps you mentioned...
<jamespage> Murali, I'll ask the question but you have added a relation between keystone and the dashboard right?
<jamespage> Murali, just checking ;-)
<Murali> yes jamespage added the relation
<jamespage> good
<Murali> http://paste.ubuntu.com/10306455/ .. this works fine :P
<valentyn> Hey, i know that it is possible to deploy to a running juju machine, but is it possible to start a "clean" instance (without any service) and to deploy services to that machine afterwards? thanks
<stub> tvansteenburgh: If you are able I'd love another run of lp:~stub/charms/trusty/cassandra/spike on the new Jenkins - I'm hoping all is good now with the 2GB VMs.
<tvansteenburgh> stub: will od
<tvansteenburgh> do
<stub> yeah, please don't OD on me.
<tvansteenburgh> hahah
<tvansteenburgh> stub: running now
<stub> ta muchly
<tvansteenburgh> you bet
<lazyPower> valentyn: juju add-machine will allocate it in the unit pool, and you can juju deploy --to afterwords.
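In concrete terms (machine number and charm are illustrative):

    juju add-machine           # clean instance, only the machine agent on it
    juju status                # note the new machine number, say 5
    juju deploy mysql --to 5   # place a service on it whenever you like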
<Crypticus> hello all
<Crypticus> I have an issue with Juju I was hoping someone might have some insight
<Crypticus> quantum-gateway has been in the "agent-state: installed" state for over 12 hours
<Crypticus> I checked the logs and there were no errors.
<Crypticus> Is this normal? for this charm? I didn't want to add the relationships until I was sure this was not broken
<Murali> hi jamespage
<jamespage> Murali, o/
<Murali> http://paste.ubuntu.com/10309259/
<Murali> we got this error while following the https://jujucharms.com/openstack/
<Murali> still installation is going on
<Murali> jamespage all the lxc containers status is in pending
<Murali> but the internet is at good spped
<Murali> speed
<jamespage> Murali, does your hardware match the requirements in the README for that bundle?
<Murali> is there anything we are missing
<jamespage> Murali, I'd drop onto one of the machines and check in /var/log/juju/*
<Murali> yes hardware matches 8 gb ram
<jamespage> Murali, and the network and disk requirements?
<jamespage> Murali, the error you detail above would indicate that maybe your servers don't have two network ports?
<Murali> we have 2 disks
<jamespage> Murali, good - that enables the ceph bits
<Murali> we have 2 nics but name is em1
<jamespage> Murali, ah right
<Murali> not eth0 and eth1
<jamespage> Murali, this is tricky - but not hard to fix - I'd just grab the bundle itself and change it so that the reference to 'eth1' is switched to em1
<jamespage> Murali, you can use juju-deployer to deploy the bundle
<jamespage> Murali, or you can do a juju set neutron-gateway ext-port=em1 now
<jamespage> and then resolve the error
<jamespage> juju resolved --retry neutron-gateway/0
<Murali> ohh that problem is fine
<Murali> But we are seeing the lxc container is in pending state let me send you
<jamespage> Murali, ok
<jamespage> Murali, checkout the logs in /var/log/juju
<Murali> http://paste.ubuntu.com/10309730
<jamespage> and checkout "sudo lxc-ls -f" on any of the physical hosts
<Murali> ok
<Murali> http://paste.ubuntu.com/10309745
<Murali> now we are not seeing any activity on debug logs also
<Murali> jamespage its late night for us. leaving now, I will be off the network
<Murali> thanks a lot for help
<jamespage> Murali, hmm - they are not getting ip address - check out the bridge configuration on the boxes - specifically juju-br0
<Murali> ohh ok
<Murali> so who will allocate IP address for lxc
<jamespage> Murali, well actually MAAS and the lxc containers just boot off standard DHCP on the network
<jamespage> Murali, are you using MAAS DHCP?
<Murali> yes we are using MAAS-DHCP
<jamespage> Murali, are you ok for a few minutes - I think we nearly have the root cause
<jamespage> ?
<jamespage> Murali, can you check the bridge configuration - using brctl show
<jamespage> juju-br0 should have all of the lxc container ports listed and the right external port - which for your system is em0 I think
<Murali> sure jamespage
<jamespage> Murali, don't want to keep you from required beer after a long day of hacking :-)
<jamespage> beer/<insert your drink of choice here>
<Murali> :P
<jamespage> hmm beer
<Murali> http://paste.ubuntu.com/10310014
<jamespage> only 4pm my time
<jamespage> Murali, ok this is the problem
<jamespage> Murali, can you do me an ip addr on that box as well
<jamespage> dimitern, ^^ eek
<jamespage> Murali, can you also confirm which juju and maas versions you are using
<jamespage> Murali, juju-br0 is normally wired to eth0
<jamespage> Murali, but it looks like juju is not dealing well with emX naming
<Murali> ok
<jamespage> Murali, that's a healthy one - http://paste.ubuntu.com/10310039/
<Murali> maas version is 1.5.4
<Murali> is there any way to come out of this issue
<Murali> or resolve it
<jamespage> Murali, probably
<jamespage> I'm just pondering it now
<jamespage> Murali, also hunting down the juju dev I know who does networking bits like this
<Murali> thanks jamespage
<Murali> our juju version is 1.21.1
<jamespage> Murali, can you also pastebin ip addr and /etc/network/interfaces please
<dimitern> Murali, jamespage, hey guys
<jamespage> dimitern, Murali is right at the end of day
<dimitern> Murali, can you also paste the contents of the maas lshw XML dump of the machine in question (the one done during commissioning by maas)?
<jamespage> thanks for helping out dimitern
<dimitern> jamespage, np
<dimitern> IIRC there was a bug filed about having emX instead of ethX NICs
<Murali> http://paste.ubuntu.com/10310220/
<Murali> maas lshw and ifconfig are as per the above paste
<Murali> maas is reading eth1 even though it is em1 in ifconfig
<dimitern> Murali, I knew something like that might be the issue
<dimitern> Murali, how come that /etc/network/interfaces config is using other names?
<dimitern> Murali, is it something done during the node deployment (by maas) or something done later (after juju started on it - maybe a charm?)
<jamespage> dimitern, this is a bundle deploy of openstack from the charmstore
<jamespage> hey scuttlemonkey
<Murali> http://paste.ubuntu.com/10310282
<dimitern> Murali, yeah, can you paste the contents of /etc/udev/rules.d/* ?
<dimitern> Murali, esp. the persistent net rules
<scuttlemonkey> jamespage: hola senor
<dimitern> Murali, /var/log/cloud-init*.log will be useful to check as well
<jamespage> scuttlemonkey, hows that calamari charm coming along? ;-)
<scuttlemonkey> jamespage: hehe, no idea :)
<scuttlemonkey> I'm drowning in a sea of bureaucracy...my stack doesn't seem to be getting shorter
<jamespage> scuttlemonkey, no worries :-)
<jamespage> beisner, not sure that we're testing on tyan's - those are the dev boards
<Murali> http://paste.ubuntu.com/10310379
<beisner> jamespage, ah good point.
<Murali> in "etc/udev/rules/"  we have only README file exists
<jamespage> dimitern, I think this is a situation that can occur depending on the type of network cards you are using
<Murali> is there any way to get rid of this issue jamespage/dimitern
<dimitern> Murali, jamespage, well, if it's the NIC drivers that are causing this, you might want to try disabling them and using generic ones
<jamespage> Murali, to unblock you I'd suggest rewriting the network configuration manually on the physical servers switching eth0 -> em1 and then restart the networking
<jamespage> dimitern, I think this is stock kernel behaviour ;-)
<dimitern> jamespage, oh boy :/
<jamespage> Murali, you may need to nudge the lxc containers manually
<jamespage> lxc-stop --name xxx && lxc-start --name xxx
<jamespage> afterwards
<dimitern> Murali, yes, s/eth1/em1/ in /etc/network/interfaces + reboot should do the trick
<Murali> ok
<dimitern> Murali, first try fixing the host machines, then if the containers haven't come up, try juju retry-provisioning 0/lxc/0 (one by one)
<dimitern> Murali, alternatively :) you could just change the lshw XML dump for these nodes in maas.. or perhaps use a third-party drivers for your hardware, so maas will properly discover the NICs during commissioning
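Pulling the unblock together as jamespage and dimitern describe it; the container and machine names below are examples:

    # on each affected physical host:
    sudo sed -i 's/eth1/em1/g' /etc/network/interfaces
    sudo reboot
    # once the host is back, nudge containers that never got an address:
    sudo lxc-stop --name juju-machine-1-lxc-0 && sudo lxc-start --name juju-machine-1-lxc-0
    # or, from the juju client, one container at a time:
    juju retry-provisioning 1/lxc/0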
<Murali> Where will be this lshw xml file .. path ??
<jamespage> Murali, can you check "apt-cache policy biosdevname" on those machines please?
<jamespage> Murali, oh wait - I bet that is installed
<jamespage> Murali, dimitern: can you confirm whether your machines were installed with the fast installer? its an option that I think in 1.5.x of MAAS you have to enable
<Murali> http://paste.ubuntu.com/10310654
<jamespage> Murali, dimitern: I suspect a standard d-i based install installs biosdevname which does this remapping magic
<dimitern> jamespage, all 4 of my local maas kvms are using the fast installer
<jamespage> Murali, the option is in the MAAS UI - you can turn fast installer on/off for any machine
<dimitern> jamespage, good catch! this sounds like the likely cause
<jamespage> dimitern, well if it is the cause then yes
<jamespage> dimitern, I checked on our MAAS based openstack installs == no biosdevname
<Murali> we were not using the fastinstaller till now
<jamespage> my local dhcp/dns server installed from usb stick has em1
<jamespage> Murali, switching to the fast installer should fix this problem - but we need a bug for this
<jamespage> I'll raise one now
<dimitern> jamespage, awesome! I'll do a quick test with one of my machines to see if switching off curtin will reproduce it
<Murali> jamespage are you suggesting we use the "fast installer" or the default one?
<jamespage> Murali, use the fast installer for all of them
<jamespage> all machines that is
<jamespage> Murali, its an image based install - no biosdevname so it should match the data reported in MAAS which juju uses.
<jamespage> dimitern, https://bugs.launchpad.net/juju-core/+bug/1423626
<mup> Bug #1423626: Inconsistent device naming depending on install method <juju-core:New> <MAAS:New> <maas (Ubuntu):New> <https://launchpad.net/bugs/1423626>
<dimitern> jamespage, ta
<jamespage> dimitern, I pinged the maas guys as well
<dimitern> jamespage, cheers - I'm updating bug 1423372 with this info
<mup> Bug #1423372: juju-br0 is not configured correctly on machines without ethX interfaces <jujud> <network> <juju-core:New> <https://launchpad.net/bugs/1423372>
<Murali> thanks a lot for great help jamespage/dimitern
<Murali> we are now re-bootstrapping using the fast installer option
<jamespage> Murali, hopefully that will sort you out
<Murali> ok
<dimitern> Murali, np, glad we could help!
<jamespage> dimitern, how about using the mac address to id the interface - then you don't need to worry about renaming
<jamespage> dimitern, fyi this will get worse under systemd
<jamespage> which I think does all this anyway
<dimitern> jamespage, I've responded in #maas as well - I don't know how to discover the NIC name (which I need to generate /etc/n/i file at the time userdata for cloud-init is generated) from its MAC address
<jamespage> dimitern, ah yes tricky
<jamespage> dimitern, I guess you'd have to inject a script that runs and interrogates the interfaces and then writes the config as part of cloud-init
<dimitern> jamespage, good idea - a bootcmd that does that can work not only for maas
<lazyPower> o/ tvansteenburgh
<lazyPower> can i bother you for a moment regarding what you did yesterday to kick off that passing test? i tried to reproduce it and i got a failure in ci, so i didnt apply the magic tim seasoning
<tvansteenburgh> lp|charm-school: looking now
<tvansteenburgh> lp|charm-school: actually i'm not sure which build it it, send me a link when you have a sec
<tvansteenburgh> s/it it/it is/
<jw4> tvansteenburgh: nice palindrome
<tvansteenburgh> omg
<jw4> hehe
<lazyPower> nice!
<lazyPower> tvansteenburgh: sure, did you need the passing build or the repository?
<tvansteenburgh> a link to the build that failed
<lazyPower> oh :( that i dont have, let me kick it off again and i'll ship it over. sorry - i didnt think to save it
<tvansteenburgh> surely we can find it
<lazyPower> i think it was build 87
<tvansteenburgh> which job name?
<tvansteenburgh> the new or old one
<lazyPower> nope, thats the pass
<lazyPower> 1 sec
<lazyPower> http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/86/console
<tvansteenburgh> lazyPower: ok so that was just before the one that passed
<lazyPower> yeah i cannot find the azure job i kicked off that failed, i just realized that was an aws substrate
<lazyPower> it vanished...
<lazyPower> 88 is a cassandra test
<lazyPower> let me kick another one off and get fresh results, sorry about this :( i should have had that in hand
<tvansteenburgh> this one? http://juju-ci.vapour.ws:8080/job/charm-bundle-test-azure/41/console
<tvansteenburgh> anyway, feel free to run another one if you want
<jw4> Who are the senior architects of Canonical's Openstack strategy that will be presenting the product roadmap in San Francisco on March 11th?  The OpenStack Roadshow...
<hazmat> jw4: probably not the best place to ask
<jw4> hazmat: hrm, yeah. (better suggestion?)
<hazmat> via pm
<jw4> yep, tx hazmat
<mbruzek> tvansteenburgh: Can you review https://github.com/juju-solutions/bundletester/pull/11 when you get a chance?
<tvansteenburgh> mbruzek: yup
<mbruzek> tvansteenburgh: thank you, I am going to change the kubernetes tests to look for the BUNDLE environment variable so our tests run successfully.
<tvansteenburgh> mbruzek: merged and jenkins updated
<mbruzek> tvansteenburgh: Thank you
<mbruzek> tvansteenburgh: just on aws?
<tvansteenburgh> mbruzek: no, all
<mbruzek> tvansteenburgh: sweet
<mbruzek> Thank you
<lazyPower> tvansteenburgh: sorry i failed on following up with that job run, i got seriously sidetracked. I'll send something over tomorrow morning after i've done my due diligence
<lazyPower> try and sneak that in before you get moving on your daily tasks
<lazyPower> o/ enjoy your evening
<tvansteenburgh> lazyPower: sure, sounds good , thanks
#juju 2015-02-20
<travnewmatic> not sure if this is the right place to ask
<travnewmatic> but can anyone point me in the direction of deploying centos with maas
<sarnold> travnewmatic: #maas ?
<travnewmatic> i've asked
<sarnold> aha
<travnewmatic> the closest they've got was this thing https://github.com/jjasghar/maas/blob/centos/recipes/centos.rb
<travnewmatic> said it was a chef recipe
<travnewmatic> but i'm not sure how to use it
<hazmat> travnewmatic: requires uploading custom linux image
<travnewmatic> i got that
<travnewmatic> it's where do i get that custom linux image, that's the question
<toyo|work> for whatever reason juju-gui is stuck pending
<toyo|work> dunno whats up
<Guest85589> hello
<Murali> Hi jamespage
<jamespage> Murali, o/
<Murali> we are able to see the ipaddresses for lxc containers
<Murali> with fastinstaller
<Murali> jamespage all the charms for openstack are installed and showing green in juju-gui. But we are not seeing the relations b/n the charms. does it take some time to come up?
<jamespage> Murali, \o/
<jamespage> Murali, I think the gui will wait for all units to come up and then add the relations
<Murali> http://paste.ubuntu.com/10323487
<Murali> it looks like everything is started
<Murali> or installed
<Murali> http://paste.ubuntu.com/10323517
<Murali> these keep repeating in the debug-logs
<Murali> jamespage its waiting for a long time. do we need to trigger something here?
<jamespage> Murali, looking
<jamespage> Murali, I don't think that log message is anything to worry about
<jamespage> Murali, I'm at a bit of a loss as to why no relations
<jamespage> Murali, any errors in the juju-gui?
<Murali> jamespage we saw one notification that an error occurred but no description
<jamespage> Murali, tbh I'm not a juju-gui expert
<jamespage> Murali, you can use juju-deployer to try and add them now
<jamespage> Murali, download the bundle and use "juju-deployer -c bundles.yaml" - it should see that the services are deployed and just add the relations
<Murali> ok will try this
<Murali> jamespage do we need to do juju bootstrap again?
<jamespage> Murali, no
<Murali> http://paste.ubuntu.com/10324052
<Murali> we are seeing this message?
<jamespage> huh?
<jamespage> odd
<jamespage> Murali, does your bundles file look like this - http://paste.ubuntu.com/10324139/ ?
<Murali> ours is http://paste.ubuntu.com/10324149
<Murali> the top-level key is missing in my file
<jamespage> Murali, I have no idea why it does this but use - https://api.jujucharms.com/v4/bundle/openstack-28/archive/bundles.yaml.orig
<Murali> jamespage after adding "openstack:" at the top of our bundle.yaml file, we are able to see the relations getting added
<CompanionCube> Is this the right channel for questions about building juju from source?
<jw4> CompanionCube: probably - what is your question?
<CompanionCube> wait, I think I'm missing some packages
 * CompanionCube forgot to install mercurial
<jw4> CompanionCube: I think the make install-dependencies step will ensure all those dependencies
<CompanionCube> yes, but mercurial is apparently needed to download some of the packages used by go get
<jw4> CompanionCube: yes, I'm pretty sure make install-dependencies will install mercurial, and 'go get github.com/juju/juju' will get that makefile at least
<CompanionCube> ...good point
<lazyPower> CompanionCube: epic handle. o/
<CompanionCube> lazyPower: Been using it on multiple networks for a few years.
<CompanionCube> Freenode is not one of my usually frequented ones however :p
<lazyPower> Welcome to the crowd :) really happy to see new potential juju-core hackers
<CompanionCube> not a hacker, just a bug that's affecting me is at 'Fix Committed'
<CompanionCube> Wait, damn. Just realised my target-system is an ARM computer, so I'll need to dig out an ARM emulator.
<adamgreig> mbruzek: heya
<lazyPower> CompanionCube: thats typically how it starts right? :)
<mbruzek> adamgreig: Have a nice day, let me know if you have any other questions.
<adamgreig> thanks again, will do
<adamgreig> it certainly all makes a lot more sense to me now :)
<whit> adamgreig, was nice to meet you!
<whit> adamgreig, and thanks for coming to charm school so well prepared :)
<adamgreig> it was good to meet you too, thanks for your time! haha, you're welcome
<adamgreig> wish I'd found the debug tools before I started just sshing into everything ;)
<whit> adamgreig, yeah, they make a huge difference. especially the syncing w/ dhx
<adamgreig> yea that sounds ace
<adamgreig> very familiar with fixing something on a running system and then nuking it and realising I hadn't copied the changes back...
<whit> adamgreig, "the joy of the cloud"
<adamgreig> hehe
<whit> adamgreig, I know you are on a rather tight deadline for the demo, so just to reiterate, do not hesitate to pick our brains
<whit> adamgreig,  if anything does not make sense or is not intuitive.   that's why we are here!
<adamgreig> got it, thanks very much :)
<adamgreig> time to get started on this charm then I guess ;)
<adamgreig> should I mostly be sticking with trusty for new stuff?
<lazyPower> yeah
<lazyPower> unless you have a specific need to target precise, i would target trusty
<adamgreig> cool
<adamgreig> is there any particularly juju-integrated way to worry about mapping machine IPs to DNS, or do you tend to do that out of band?
<lazyPower> adamgreig: actually - interesting that you ask that
<lazyPower> i have written a bind9 charm to use in dev (not in prod)
<lazyPower> but i have plans on extending this with "proxy service wrappers"
<lazyPower> let me fish up the url for you
<lazyPower> https://github.com/chuckbutler/dns-charm - again, not prod ready (so this may not be useful to you)
<adamgreig> ah neat, so a service could tell bind what its IP is and what it'd like its hostname to be?
<lazyPower> basically you define a blanket domain on this bind provider
<lazyPower> and each service gets its own group/host dns entry, and upon relation-join it updates the resolvconf template to add that dns server
<lazyPower> so you can add the ip of that dns-charm to your workstation and get the perks of not having to know your service ip's
<adamgreig> cool. and then in theory you have that be authoritative for that zone publicly and now your services automatically get their IPs publicised?
<lazyPower> correct, its a huge SPOF but as POC ware - thats what its achieved
<adamgreig> got it
<lazyPower> all contributions in the form of bug reports, feature requests, and commentary are appreciated
<adamgreig> in theory you could do something very similar only instead of installing bind and updating its local conf, you publish new records to some dns service (AWS's or linode or whatever)?
<lazyPower> (and PRs if you're feeling adventurous)
<lazyPower> exactly i wanted the dns-charm to be a one stop shop for that
<lazyPower> by implementing a provider, you wrap their API and execute the proper motions on behalf of the user in their given provider
<lazyPower> like godaddy, awsrt53, dnsimple - all have api's and were targets for the 1.0 release
<lazyPower> thats all outlined in docs/HACKING.md
<lazyPower> and https://github.com/chuckbutler/DNS-Charm/blob/master/docs/provider.md
<adamgreig> nice
<lazyPower> I'd love to get back on this in the near future, i haven't touched it since mid 2014 unfortunately. but with some new interest - lets plan and make something great :)
<adamgreig> sounds fun! it seems like a pretty neat way to get everything totally set up
<adamgreig> atm vzzual's infrastructure is mostly on rackspace's cloud for various reasons but I imagine their DNS API is amenable (not played with it yet)
<CompanionCube> I'm getting 'storage/volume.go:11: undefined: names.DiskTag' when running 'go install'
<lazyPower> CompanionCube: might want to join #juju-dev - as the core devs are typically paying attention in there.
<CompanionCube> ah, okay
<CompanionCube> I'll be there shortly
<marcoceppi> CompanionCube: you shouldn't need to run go install
<marcoceppi> CompanionCube: just `go get github.com/juju/juju/...`
<marcoceppi> CompanionCube: you may also need to run `JUJU_MAKE_GODEPS=true make godeps`
<CompanionCube> yay works now, got an ARM version of juju built.
<lazyPower> \o/
<marcoceppi> CompanionCube: if you're bootstrapping anything other than local
<marcoceppi> you'll need to bootstrap with --upload-tools when you compile from source
<CompanionCube> marcoceppi: anyway to upload tools like that with an already bootstrapped environment? or do I need to tear-down and redeploy
<marcoceppi> CompanionCube: was the fix you need on the client or agent side?
<marcoceppi> (what was teh bug, btw?)
<CompanionCube> https://bugs.launchpad.net/juju-core/+bug/1420049 - visibly affects ppc64el, but I spotted the error on my ARM deployment also
<mup> Bug #1420049: ppc64el - jujud: Syntax error: word unexpected (expecting ")") <deploy> <openstack> <regression> <uosci> <juju-core:Triaged> <juju-core 1.22:Fix Committed by axwalk> <https://launchpad.net/bugs/1420049>
<marcoceppi> CompanionCube: yeah, so you'll need to upgrade juju. You can try a juju upgrade-juju with --upload-tools from the newly compiled juju
<marcoceppi> CompanionCube: but it might be easier to just re-bootstrap
<CompanionCube> thanks
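Pulling the thread together, a from-source build-and-run sketch assembled from the commands mentioned above (environment name illustrative):

    go get -d github.com/juju/juju/...     # fetch the tree without building
    cd $GOPATH/src/github.com/juju/juju
    make install-dependencies              # apt packages, mercurial included
    JUJU_MAKE_GODEPS=true make godeps      # pin dependency revisions
    go get github.com/juju/juju/...        # build and install the binaries
    # fresh environment carrying the locally built agent tools:
    juju bootstrap -e myenv --upload-tools
    # or push new tools into an environment that is already running:
    juju upgrade-juju -e myenv --upload-tools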
<lazyPower> nicopace: gave you some initial looks over those test plans on the charms i'm familiar with
<CompanionCube> Works. Yay :)
<lazyPower> +1 on those - good strategies. thanks for publishing the bugs.
<Airdawg> Hi ppl. Did something happen with juju-charms? There's only 13 charms/solutions and no bundles.
<Airdawg> The number is slowly increasing though.
<ctlaugh> I'm looking for some help on the correct way to configure the charm(s) to deploy Nova correctly to use either Neutron or FlatDHCPManager (whatever is easier) when deployed by Juju on a MAAS node.  The node only has 1 network interface that is brought up (eth0) and it is already bridged to juju-br0.
<ctlaugh> ^^ The examples I find all involve needing more than one network interface, but MAAS/Juju only bring up 1 of them.
#juju 2015-02-21
<toyo|work> can juju run things like apt-get update && apt-get upgrade on nodes?
<toyo|work> in maas
<toyo|work> guess thats not really a service
<sarnold> toyo|work: a while back I put together an unattended-upgrades subordinate charm..
<sarnold> toyo|work: I don't know if thats "the best" way to keep systems upgraded; landscape enrollment of the units is another option.
<toyo|work> ah
<toyo|work> would be nice if it was built into maas
<toyo|work> :D
<toyo|work> weird, I just tried setting up an apache instance tied to the juju-gui as a balancer and for whatever reason it didnt put a proxypass directive in the apache config so it dont work
<toyo|work> when you juju destroy-machine is it supposed to power it down?
<rick_h_> toyo|work: yea, I think landscape is the 'official' answer to the upgrade/management story there.
<rick_h_> toyo|work: if there's something missing in the juju-gui/apache relation please file a bug and we'll look into it. I think we tend to put things like haproxy in front, and not sure we've got a well tested apache relation in that situation
<toyo|work> thanks rick_h_
<toyo|work> rick_h_: I also figured out the answer to mass upgrades
<toyo|work> juju run "sudo apt-get update" --all
#juju 2016-02-22
<apuimedo> jamespage: I got the MEM manager charm working yesterday with ha and all. I'll be adding tests and unit tests and I'll pass it along for review
<apuimedo> I was also checking the reactive layers
<apuimedo> It's quite an interesting concept
<apuimedo> and the interfaces possibly even more
<apuimedo> but by God will it be a piece of work to move the openstack charms to that
<jamespage> apuimedo, a large bit of work to re-factor all 23 existing charms to reactive/layers/interfaces :-)
<apuimedo> yeah
<jamespage> apuimedo, good morning btw
<apuimedo> I saw somebody started working on the keystone interface
<apuimedo> jamespage: good morning indeed ;-)
<jamespage> apuimedo, yah - let me give you a few pointers as to work we've done
<jamespage> apuimedo, https://code.launchpad.net/~openstack-charmers/+git
<apuimedo> I added a Midonet API interface yesterday
<apuimedo> the only thing I don't quite see well is that in most interfaces I don't see any place for unit tests
<apuimedo> so it feels a bit magical
<deanman> Hello, I'm trying to bootstrap an AWS-compatible cloud environment (eucalyptus) and can't seem to figure out what's the configuration parameter to define the endpoint URL in environments.yaml. Any hints?
<mgz> deanman: there isn't one, you'll need to make some source changes
<mgz> deanman: but that's likely less of an issue than getting an image etc working
<mgz> deanman: you're probably better off using the manual provider
<deanman> mgz: There are a couple of google hits where people discussed deploying on eucalyptus and some even said they succeeded, but I couldn't find any reference for the endpoint url while checking the docs. Ok, most probably you are right, i should check with the manual provider
<tinwood> ls
<tvansteenburgh> I was just reading about Binding Service Endpoints to Spaces in the juju-core 2.0beta1 release notes. Does anyone know how to define an endpoint in a charm? I don't remember seeing anything about that.
<tvansteenburgh> Never mind, this explains it: http://juju-sapphire.github.io/MAAS%20Spaces%20Demo/
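From those notes, binding endpoints to spaces at deploy time looks roughly like this in the 2.0 betas; the space names are invented and a spaces-aware provider (MAAS at the time) is assumed:

    juju deploy cs:mysql --bind "db=internal-space"
    # a bare space name sets the default for any endpoint not listed explicitly:
    juju deploy cs:mysql --bind "internal-space db=database-space"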
<icey> does upgrade_charm kick off config changed? and do defaults (that are left as defaults) update if changed in an upgrade_charm?
<cory_fu> icey: I'm fairly certain that the only hook that runs during a `juju upgrade-charm` is the upgrade-charm hook
<icey> thanks cory_fu, do you know if it will update the config if defaults change (and were still defaults at upgrade time)?
<cory_fu> icey: Oh, you mean if the new version has different defaults?  I don't know, but my guess would be not
<icey> ok, thanks cory_fu!
<cory_fu> But that's just a WAG, so you should really test it.  I'm curious as to what the result is if you do
<jamespage> icey, config-changed will always run after upgrade-charm
<icey> jamespage: but do the values (that were still defaults) change to new defaults as well?
<jamespage> yes
<jamespage> juju records if a non-default value was provided
<jamespage> if it's not been changed, the default will change
<icey> awesome, thanks jamespage!
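A quick way to watch that behaviour, with 'mycharm' standing in for a real service; the default-tracking detail is as jamespage describes it:

    juju upgrade-charm mycharm
    # hook order on each unit: upgrade-charm, then config-changed
    juju get mycharm   # settings still marked "default: true" pick up the new defaults;
                       # explicitly-set values are left alone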
<neiljerram> afternoon all - Can I ask a question about Neutron-related charmhelpers?
<neiljerram> I'm working on charm integration for a networking backend called calico. We've actually had charm support for a while, but now I'm just adding in Liberty support
<jcastro> magicaltrout: ping! You were the guy looking at doing gitlab right?
<pmatulis> anyone use rackspace with juju 2.0 ?
<jose> pmatulis: afaik there's no provider, so I use manual
<pmatulis> yes, the provider exists, i just can't get it to work
<jose> oh... let me double check here
<jose> ah, I'm stuck in 1.26
<neiljerram> how can I clear the charm cache on the bootstrap machine?
<neiljerram> On the bootstrap machine, I see this, and I think it's causing the deploy not to fetch my latest code from launchpad:
<neiljerram> lp:~project-calico/charms/trusty/nova-cloud-controller/liberty
<neiljerram> root@calico-vm07:/var/log/juju# find / -name "*nova-cloud-controller*"
<neiljerram> /var/lib/juju/charm-get-cache/local_3a_trusty_2f_nova-cloud-controller-501.zip
<hatch> I see that api-info is no longer a command in juju 2.0 - can anyone fill me in on what the equivalent command to `juju api-info --password password` is?
<neiljerram> hatch, I'm afraid I doubt it.  I'm mostly seeing questions on this channel, and no answers.
<neiljerram> Still, I wish you more luck than I've had!
<hatch> :) Someone will pop in to answer you I'm sure
<rick_h_> hatch: get-model-config maybe?
<rick_h_> neiljerram: the deploy pulling from LP?
<rick_h_> neiljerram: it comes from the store, if you've updated in LP it needs to get reingested which takes a couple of hours atm
<neiljerram> rick_h_, I understood that it was possible to use a launchpad branch directly, with this syntax in the bundle file:
<neiljerram>     "nova-cloud-controller":
<neiljerram>       branch: "lp:~project-calico/charms/trusty/nova-cloud-controller/liberty"
<rick_h_> neiljerram: juju deploy doesn't use that, only the juju-deployer tool. what version of juju are you using?
<neiljerram> rick_h_, Also, my last upload to that branch was a couple days ago, so should have had any digestion needed.
<neiljerram> root@calico14:~/md4# juju --version
<neiljerram> 1.25.3-trusty-i386
<rick_h_> neiljerram: k, your branch isn't /trunk so it won't be ingested to the charmstore, but once it is it'll be cs:~project-calico/trusty/nova-cloud-controller
<neiljerram> I was deploying with: juju deployer -v -c liberty.yaml
<neiljerram> rick_h_, Yes, that's what I thought.  I didn't want to create something in the store until I had some confidence that the code was working - hence the launchpad branch.
<rick_h_> neiljerram: k, so to confirm you're using juju-deployer to deploy this?
<rick_h_> neiljerram: oic, you're trying to upgrade the charm?
<nagyz> ok, gonna ask here, since #maas is usually dead: do you guys think I can add arbitrary subnets to MaaS using the CLI? I think the requirement that the maas controller sits on every subnet that it can deploy nodes to is far fetched
<nagyz> unless I have this backwards and it's just an auto-add-subnet function for the ones known already
<nagyz> looking to deploy a greenfield OpenStack using proper public and private subnetting...
<stokachu> whats the account parameter for in the CreateModel api? https://github.com/juju/juju/blob/master/api/modelmanager/modelmanager.go#L56
<neiljerram> rick_h_, yes, exactly
<stokachu> juju help create-model only lists --owner and --config as an option
<rick_h_> neiljerram: ok, and how are you doing the upgrade?
<rick_h_> stokachu: I think that's the work not complete yet
<rick_h_> stokachu: you'll be able to specify a credentials.yaml entry
<stokachu> rick_h_: ok
<stokachu> gotcha
<rick_h_> stokachu: e.g. juju create-model work-ec2-creds
<rick_h_> and not have to pass them via the -c yaml file param
<rick_h_> at least that's what the spec says, not looked into accounts in this code yet :)
<stokachu> nice, yea right now i just use the ConfigSkeleton api call for that
<stokachu> i wonder if they'll have a AccountSkeleton
<stokachu> ok cool np
<rick_h_> stokachu: if you're curious axw and wallyworld are the masters of this stuff
<neiljerram> I took a fork of the upstream nova-cloud-controller, with: bzr branch  lp:~openstack-charmers/charms/trusty/nova-cloud-controller/next
<stokachu> rick_h_: yea im just trying to stay up with the latest api changes for my work with bigdata install
<stokachu> so i can be ready when juju goes live
<neiljerram> Then I added my changes to that, and pushed to lp:~project-calico/charms/trusty/nova-cloud-controller/liberty
<rick_h_> stokachu: gotcha, yea the credentials.yaml file is there, but the api for add-credential and such is in progress as next step for beta2
<stokachu> ok cool, thanks for the info
<rick_h_> neiljerram: k, but what command did you run expecting the charm to upgrade
<neiljerram> rick_h_, nothing yet, I think.
<neiljerram> rick_h_, I'm thinking the process is: 1. try out new code. 2. propose merge of my new code into lp:~openstack-charmers/charms/trusty/nova-cloud-controller/next
<neiljerram> So at the moment I'm only at step (1).
<rick_h_> neiljerram: right, but you were saying that the charm was't grabbing the new code
<rick_h_> neiljerram: so you wanted to clear some caches?
<neiljerram> Yeah, what I see is that on the machine where that charm should go, the deploy fails.  And when I dug into the cause of the failure, it's because it doesn't have the latest code on that machine.
<rick_h_> neiljerram: ok, so when you juju-deployer that bundle file it didn't get the latest bzr branch?
<neiljerram> rick_h_, Yes, that's it.
<bcsaller> is it too late to revise the "juju show-controller" (and similar) output so as not to put data in the keyspace? It's much harder to write a validating schema for these things if things like the controller name are keys rather than data
<rick_h_> bcsaller: shoot an email to the list with concerns and we can get feedback from wallyworld/axw and look into it.
<bcsaller> rick_h_: yeah, sure, thanks
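A sketch of the distinction bcsaller is drawing, with invented field names: when the controller name is an ordinary value (first form) a validating schema can describe it, but when it lives in the keyspace (second form) the schema cannot know the key in advance.

    # names as data -- straightforward to schema-validate:
    controllers:
      - name: local
        api-endpoint: 10.0.0.1:17070
    # names as keys -- the schema cannot enumerate "local":
    controllers:
      local:
        api-endpoint: 10.0.0.1:17070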
<hatch> is there any way I can debug a hung " Waiting for agent initialization to finish" on aws with juju 2.0
<hatch> it's been allocating for over 30minutes now
<cherylj> hatch: take a look at /var/log/cloud-init-output.log and if that shows that the jujud started, look at /var/log/juju/machine-x.log
<hatch> cherylj: the machines haven't been allocated on aws
<cherylj> hatch: ah, that's interesting
<hatch> is this cloud init output file on the controller?
<cherylj> hatch: no, it would be on the instance
<hatch> ahh ok, so yeah, no instance :)
<hatch> These machines were created via the GUI, lemme see if I can add units via the CLI
<cherylj> hatch: I'd look at the machine-0.log and search for the name of the service you deployed...  see if there are any errors around that
<hatch> alright
<cherylj> hatch: are you using beta1?
<hatch> cherylj: tip updated this morning
<cherylj> the GUI has some compatibility issues with 2.0-alpha2 and later
<hatch> not anymore!
<hatch> :)
<cherylj> yay!
<hatch> well...tip gui
<hatch> lol
<cherylj> hehe
<hatch> actually it's still missing some features, but we're very close
<cherylj> niiiice
<ChrisHolcombe> using the juju coordinator seems to be a little bit tricky.  On my little ceph monitor cluster of 3 only 2 out of the 3 hosts got the coordinator lock.  The 3rd didn't for some reason
<hatch> cherylj: the first instance of 'mariadb' in machine-0 log is: 2016-02-22 17:13:45 ERROR juju.state.unit unit.go:740 unit mariadb/0 cannot get assigned machine: unit "mariadb/0" is not assigned to a machine
<pmatulis> rick_h_: re your config, can you confirm 'tenant-name == your account #' and 'username == name you log in to UI with' and 'password == p/w to log in to UI with' ?
<ChrisHolcombe> sorry i should say charmhelpers coordinator*
<hatch> cherylj: so even via the CLI it doesn't ever appear to allocate any more machines on aws
<hatch> possible that tip is broken?
<cherylj> hatch: bleh, that would be bad.  Let me see if I can reproduce
<hatch> thanks cherylj - it's entirely possible that I have somehow broken this, but I was able to place units on the controller itself
<cherylj> hatch: were you just doing "juju deploy mariadb"?  or were there special options / config?
<hatch> I deployed the charms via the GUI, and scaled via the gui. When I tried to scale via the CLI I ran 'juju add-unit wordpress'
<hatch> ^ cherylj
<neiljerram> rick_h_, if you're still around - I don't think we reached a resolution in our conversation a little earlier...
<jcastro> any resolution on the lxd/2.0beta issues identified over the weekend? I would cry if I had to move back to 1.25.
<lazyPower> cory_fu : i've run into a context issue with relationships and peering :/  i dont recall if we solved this already.   > ValueError: Unable to determine default scope: no current hook or global scope
<lazyPower> iirc - i think we had to assign each unit in the conversation to the scope bits, and iterate them?
<cory_fu> Peers have to be unit scope.  Unit scope is incompatible with default conversations, so you always have to loop over self.conversations() and you can't ever use self.conversation() or self.set/get_remote
<lazyPower> its all in here - https://github.com/mbruzek/interface-tls/blob/master/peers.py
<lazyPower> found it, ta
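A minimal sketch of the pattern cory_fu describes, assuming the charms.reactive RelationBase API of the time; the relation and state names are illustrative.

    # Unit-scoped (peer) interface class: with scopes.UNIT there is one
    # conversation per remote unit, so you must iterate over
    # self.conversations() rather than call self.conversation() or
    # self.get_remote() directly.
    from charms.reactive import RelationBase, scopes, hook

    class TlsPeer(RelationBase):
        scope = scopes.UNIT

        @hook('{peers:tls-peer}-relation-changed')
        def changed(self):
            for conv in self.conversations():
                # read the data published by that specific peer unit
                cert = conv.get_remote('certificate')
                if cert:
                    conv.set_state('{relation_name}.cert.available')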
<rick_h_> negronjl: sorry, had a phone call
<rick_h_> oops neiljerra...who left
<rick_h_> doh
<beisner> dosaboy, jamespage - ceph-* MP tests blocked on:  bug 1548478
<mup> Bug #1548478: nova.conf section [libvirt] is missing option rbd_pool <uosci> <ceph (Juju Charms Collection):New> <ceph-radosgw (Juju Charms Collection):New> <nova-compute (Juju Charms Collection):New> <https://launchpad.net/bugs/1548478>
<dosaboy> beisner: hmm not sure how that got through without breaking amulet, i removed that conf entry cause it is useless
<dosaboy> beisner: i'll see if i can fixup the tests
<beisner> dosaboy, fyi, it made it through b/c the nova-compute test doesn't involve ceph.
<beisner> in amulet that is
<dosaboy> beisner: tru say
<beisner> dosaboy, thanks for taking that
<dosaboy> beisner: it's just an n-c thing though, right? i mean it is not a bug but more an amulet test issue in the n-c charm
<beisner> dosaboy, right, the n-c test is good.  other charms that cross-validate that the rbd conf thing was in nova.conf will need test updates
<dosaboy> beisner: 10-4 got it
<dosaboy> beisner: i guess we need to land this first - https://code.launchpad.net/~hopem/charms/trusty/nova-compute/lp1548478/+merge/286833
<dosaboy> then the amulet fixes (coming up)
<beisner> dosaboy, ack
<dosaboy> beisner: amulet fixes all submitted but set to WIP until ^^ lands
<beisner> dosaboy, right on.  tyvm.
<dosaboy> beisner: thank you for raising the bug with all the necessary info :)
<beisner> dosaboy, happy to
<ChrisHolcombe> is it possible for one juju unit to call a function on another?
<lazyPower> ChrisHolcombe - relations and actions make anything possible :)
<ChrisHolcombe> lazyPower, so there's no issue with unit A calling functions on unit B?
<ChrisHolcombe> rpc style?
<lazyPower> There's some glue code: you'll want to patch the charm to listen to the API, and you'll have to give the charm juju API credentials
<lazyPower> but yes, it's completely possible, and wouldn't cause an issue if you encapsulate the concerns properly
<ChrisHolcombe> lazyPower, +1000
<ChrisHolcombe> lazyPower, so how would i go about giving api creds to a unit?
<lazyPower> ChrisHolcombe - every time i've done it i've had marco or aisrael's help
<ChrisHolcombe> lazyPower, time for a hacky workaround then
<lazyPower> may the force be with you
<ChrisHolcombe> ;)
<stokachu> anyone run into this before: connection attempt for 172.16.0.150 failed: /var/lib/juju/nonce.txt does not exist
<stokachu> does it just take awhile for it to generate?
#juju 2016-02-23
<Gil> can anyone share a link with steps to install juju-gui charm to openstack liberty? thanks!
<lazyPower> stokachu usually that was a symptom of something being sick in my env, like the security group not opening the port to the mongo instance on the model controller, or maybe my maas bootstrap timeout wasn't long enough
<lazyPower> Gil - hey there, not really sure what you're asking. Do you have an openstack liberty provider available to you that you would like to use to model your applications in juju, and additionally deploy the juju gui?
<narindergupta> marcoceppi: hey marco
<gennadiy> hi everybody, may i use git launchpad repo to publish charms?
<bloodearnest> yo lazyPower, just looking over
<bloodearnest> https://docs.google.com/presentation/d/1a5l1bKX8dPwx21LkMQmp-zVjzOsgoTE_iQ2urD9znxk/edit?pref=2&pli=1#slide=id.g70d4533c6_2_140
<bloodearnest> and have some questions when you are about
<kjackal> hi everyone. I am manually provisioning an IPv6 only VM. Deployment of charms there fail with cannot get archive: Get https://api.jujucharms.com/charmstore/v4/trusty/mysql-35/archive: dial tcp 162.213.33.121:443: network is unreachable
<kjackal> Does juju handle this case or is it all up to the admin?
<pitti> hello
<pitti> what could be the cause of
<pitti> $ juju deploy --repository deployment/charms  local:trusty/langpack-o-matic
<pitti> ERROR charm not found in "/home/martin/ubuntu/langpack-o-matic/deployment/charms": local:trusty/langpack-o-matic
<pitti> the charm definitively exists:
<pitti> $ ls /home/martin/ubuntu/langpack-o-matic/deployment/charms/trusty/langpack-o-matic/
<pitti> config.yaml  hooks  metadata.yaml  README.md
<pitti> (juju 1.25 in xenial)
<pitti> I have another charm in that dir ("bootstrap-node"), and deploying that works fine
<pitti> $ ls /home/martin/ubuntu/langpack-o-matic/deployment/charms/trusty/bootstrap-node/
<pitti> hooks  metadata.yaml
<pitti> ah, nevermind! typo in "name:" in metadata.yaml
<pitti> typical "you have to ask, and then you'll figure it out" situation
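For the record, the pitfall pitti hit: with a local repository, juju 1.x resolves local:<series>/<name> against the name field in metadata.yaml, which must match the charm's directory name. A minimal sketch:

    # deployment/charms/trusty/langpack-o-matic/metadata.yaml
    name: langpack-o-matic   # must match the directory name
    summary: Builds Ubuntu language packs   # illustrative
    description: |
      Placeholder description for this sketch.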
<lazyPower> yo bloodearnest  o/
<bloodearnest> lazyPower, hey
<lazyPower> hows things?
<bloodearnest> good, sprint next week, so lots of prep
<bloodearnest> and trying to understand this new systemd world...
<lazyPower> gennadiy - (super late reply) - you can warehouse the code in git, however bzr is still the only ingestion method right now. There's a new feature coming that decouples your charms from DVCS which in turn provides instant publishing
<lazyPower> bloodearnest - i hear ya man!
<lazyPower> i upgraded to xenial on my primary workhorse and its been slow going getting the system stood back up. i need another weekend on it
<bloodearnest> so, tls layer
<lazyPower> still totally a thing :)
<bloodearnest> yeah
<lazyPower> anything specific i can help with?
<bloodearnest> my usage is very different though
<gennadiy> i have found bzr-sync module. it will sync my code from git to bzr
<bloodearnest> so, some thoughts:
<gennadiy> thanks for your response
<lazyPower> gennadiy - its a nice solution for a short term problem :)
<lazyPower> gennadiy also o/ great to see you in here
<bloodearnest> 1) a standard layer for generating certs would be useful
<bloodearnest> for us to use easyrsa, it would need to be installed by system packages
<bloodearnest> but we could use
<bloodearnest> it
<lazyPower> yeah?
<lazyPower> so, we need to put easy-rsa in a PPA or is that still a red-flag?
<bloodearnest> 2) there are 2 distinct uses of tls certs here: intra service comms (your layer/interface), and public service comms
<jrwren> kjackal: most charms have no support for ipv6. It's not that they can't support it, but they've never been tested there, so they often do things that do not work with ipv6.
<lazyPower> bloodearnest - yeah, matt and I have talked about this, the public facing ssl bits. we dont have a path forward with any time allotted to get that done
<mbruzek> heyo
<lazyPower> but we've been kicking around ideas. we started with an idea to wrap lets-encrypt as a layer
<jrwren> kjackal: e.g. take an ip address from juju and curl http://thatip/   which doesn't work because it needs to be wrapped in [] to be a valid ipv6 url.
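The class of bug jrwren describes, in a few lines of Python; build_url is a hypothetical helper, not part of any charm library.

    def build_url(host, port=80, path='/'):
        # A bare IPv6 address contains ':' and must be bracketed to be a
        # valid URL host; IPv4 addresses pass through unchanged.
        if ':' in host:
            host = '[%s]' % host
        return 'http://%s:%d%s' % (host, port, path)

    print(build_url('10.0.0.1'))     # http://10.0.0.1:80/
    print(build_url('2001:db8::1'))  # http://[2001:db8::1]:80/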
<lazyPower> bloodearnest - if you were going to put public facing ssl infrastructure in your modeling language. what would be your preferred method to do so?
<bloodearnest> lazyPower, ppa won't work, but we could add a package to our archives, perhaps. I'm also not sure how much control it provides. Can it do SubjectAlternativeName for DNS *and* IP?
<lazyPower> yeah
<lazyPower> it already does add SAN for DNS and IP
<bloodearnest> k
<lazyPower> i think we can tune the config to include a config option for additional SAN
<lazyPower> right now we're kind of lazy about what we stuff in the SAN, ip and hostname
<lazyPower> but it supports both entry styles
<bloodearnest> lazyPower, so, the 2 uses are different enough to warrant different approaches, and different interfaces, I suspect
<lazyPower> oh for sure
<lazyPower> self signed certs vs ca signed certs
<kjackal> jrwren: True. So to have juju on an ipv6 setup we need a translation service to ipv4
<lazyPower> mbruzek o/ morning
<mbruzek> heyo
<lazyPower> mbruzek we're talking about our baby
<lazyPower> > re: layer-tls
<mbruzek> is my baby ugly?
<lazyPower> its our ugly babby
<mbruzek> say it isn't so!
<lazyPower> :P nah
<lazyPower> bloodearnest was just riffing about how we can make it more useful to more ppl
<bloodearnest> http://bazaar.launchpad.net/~bloodearnest/charms/trusty/x509-cert/trunk/view/head:/lib/selfsigned.py
<bloodearnest> is what we want, in terms of self signed cert
<lazyPower> they need a deb package of easyrsa. apparently fetching it from where we're grabbing it is basically out of sorts
<bloodearnest> the DNS/IP thing is an openssl/gotls thing
<lazyPower> bloodearnest we can do these alt_names no prob
<bloodearnest> gotls is stricter and wants proper SANs
<bloodearnest> cool
<lazyPower> right on
<lazyPower> so, matt and i are spiking on k8s this week
<lazyPower> if you want to file some bugs @ the repo for layer-tls we can start planning and try to get it on our board
<bloodearnest> so, I think what we want is a subordinate charm, so we can configure multiple certs for one service (e.g. apache, haproxy)
<bloodearnest> lazyPower, ack
<mbruzek> we talked about a subordinate charm that is a good idea, I wonder if both can be built from the same layer so we don't have to maintain two different codebases
<bloodearnest> mbruzek, you can build 2 charms from 1 layer?
<mbruzek> yes
<mbruzek> bloodearnest: I would see a subordinate layer that imports tls, and just has metadata that makes it a subordinate
<mbruzek> Then add the functionality you and lazypower were discussing to the tls-layer
<bloodearnest> I suspect the interface types will be different (1 for peer negotiation, 1 for simple path communication)
<bloodearnest> right
<lazyPower> path of least resistance
<mbruzek> bloodearnest: The subordinate could have a tls provides relation and or requires, and you would have to extend our tls interface which *only* deals with the peer relation at this point
<bloodearnest> right
<mbruzek> Again those could be done in the reusable tls layer and interface
<mbruzek> Your subordinate layer would be extremely small, just making it a subordinate and using the provided functionality
<bloodearnest> so they'd be differentiated on relation type (provides, peer) rather than name?
<mbruzek> bloodearnest: yes
<bloodearnest> wfm
<mbruzek> the beauty of layers!
<mbruzek> reusable components
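A sketch of the thin subordinate layer mbruzek describes: little more than metadata that flips the subordinate bit and declares a container-scoped relation. All names here are hypothetical.

    # metadata.yaml of a subordinate layer built on the tls layer
    name: tls-subordinate
    summary: Supplies certificates to a colocated principal service
    subordinate: true
    requires:
      certificates:
        interface: tls
        scope: container   # a subordinate needs a container-scoped relation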
<bloodearnest> so, the issue is, how best to generate certs, preferably using just system packages
<bloodearnest> or python deps
<bloodearnest> the python version above works fine in xenial, fwiw, python-cryptography is in main
<bloodearnest> but not trusty :(
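For reference, the python-cryptography route that bloodearnest's selfsigned.py takes looks roughly like the sketch below; the hostname and IP are placeholders, and the SubjectAlternativeName carries both the DNS and IP entries that gotls insists on.

    import datetime
    import ipaddress

    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                                   backend=default_backend())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u'myservice.internal')])
    san = x509.SubjectAlternativeName([
        x509.DNSName(u'myservice.internal'),
        x509.IPAddress(ipaddress.ip_address(u'10.0.0.10')),
    ])
    now = datetime.datetime.utcnow()
    cert = (x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)   # self-signed: issuer == subject
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=365))
            .add_extension(san, critical=False)
            .sign(key, hashes.SHA256(), default_backend()))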
<mbruzek> well the current tls layer uses easyrsa (as lazypower pointed out), if that is not sufficient you can suggest alternate methods.
<mbruzek> In Juju 2.0 you can in the metadata.yaml specify what release your charm supports.
<bloodearnest> any method is fine, as long as it's a) vendored or b) system packaged
<bloodearnest> grabing from git is a no-no
<lazyPower> bloodearnest how about resources?
<bloodearnest> that would work also
<bloodearnest> but require some manual prepping
<lazyPower> what if easyrsa were exposed as a charm resource? you, the deployment engineer, stuff the resources in your model-controller, and when you deploy, boom, it's all offline.
<bloodearnest> nicer if it could work OOTB
<bloodearnest> the self signed stuff is really only for devel
<bloodearnest> plus, we are a ways away from being on 2.0
<mbruzek> bloodearnest: I used easyrsa from github because it had some bug fixes I needed. if you can get the repo one working submit a pull request for that.
<bloodearnest> mbruzek, ok, I will try that
<mbruzek> easy-rsa latest github release 3.0.1, vs the latest easy-rsa in Xenial is 2.2.2
<mbruzek> I know you have rules against github, but I question if our rules move at the speed of modern software
<bloodearnest> mbruzek, not my rules
<mbruzek> bloodearnest: I know
<mbruzek> my point is you are 4 releases behind upstream, and the rules are supposed to be for "security", I would want the latest release if I were doing it for myself.
<mbruzek> It would be great if we could create a snap of the latest release and put that in the charm.  If snaps can be trusted like archive
<lazyPower> i have this appliance docker image i use to print some certs sometimes
<bloodearnest> mbruzek, can't we just vendor it in some other form
<lazyPower> caveat: you install docker on every host you want to generate certs
<bloodearnest> it's just a cli wrapper around openssl cli,
<bloodearnest> right?
<lazyPower> pretty much
<lazyPower> but its based on busybox so its stupid small
<lazyPower> thats the only saving grace here
<bloodearnest> lazyPower, mbruzek: easy-rsa 3.0.1 is 140k of text files (40k of docs)
<bloodearnest> seems reasonable to vendor into the layer?
<mbruzek> kilobytes?
<bloodearnest> yep
<mbruzek> bloodearnest: yeah I think we are OK there, I would get worried about gb
<mbruzek> bloodearnest: you were going to change it to the package manager anyway... I am sure that one is less kilobytes right?
<bloodearnest> probably about the same
<mbruzek> bloodearnest: I get it, our current layer does it wrong by grabbing from github.
<bloodearnest> I think w/o docs and extras, you're talking about 60kb
<cory_fu> kjackal: I don't know if we'd need a translation layer really, we just need to audit charms and start testing them in IPv6 environments to ensure they're coded to be IPv6 aware.  That said, your error seems like Juju couldn't connect to the charm store via IPv6 which seems even more fundamental than charms supporting IPv6.
<bloodearnest> mbruzek, not objectively wrong, perhaps, but wrong for us :)
<cory_fu> I wonder if rick_h_ can chime in on whether there are issues between Juju and the charm store in IPv6
<mbruzek> bloodearnest: so help us fix it so it is _more_ useful
<bloodearnest> mbruzek, on it
<rick_h_> cory_fu: yes! :)
<cory_fu> Yes you can chime in, or yes there are issues?  ;)
<rick_h_> cory_fu: was just talking with kjackal about this in another channel and asked him to kick off an email because we expect problems with the store and the charms in them to be honest
<cory_fu> Ah, I see
<rick_h_> cory_fu: the charmstore is fronted by apache2 with SSL termination and only works on IPV4, we'll have to work with IS on how to add IPV6 support
<rick_h_> cory_fu: but there's a bigger issue as to what/how charms would work?
<rick_h_> can wordpress, exposed, work with IPV6 ootb behind haproxy?
<Gil> hi lazyPower thanks.  never had any problems when deploying juju-gui on trusty.  When I try deploy on liberty, I get this:  "ERROR cannot resolve charm URL "cs:wily/juju-gui": charm not found".  in my environments.yaml I have "default-series: wily" in the maas section.  I needed wily and liberty to deploy this successfully:  https://jujucharms.com/u/openstack-charmers-next/openstack-lxd
<rick_h_> same with all services that can be exposed, how many don't support binding to an IPV6 addr
<cory_fu> True
<rick_h_> Gil: do you need the GUI running on wily?
<rick_h_> Gil: are you colocating it with another wily service or something?
<jrwren> rick_h_: and the inverse too, a cloud may be ipv4 on CGN or private IP, but support ipv6 public addresses
<rick_h_> jrwren: yea, there's a whole can of worms here we've not worked through to my knowledge
<bloodearnest> mbruzek, so, the bugfixes you needed are not in 3.0.1, correct?
<mbruzek> bloodearnest: I don't recall what version was needed, but the github version fixed the error I was getting
<bloodearnest> right
<bloodearnest> I will attempt a PR to use a vendored version
<Gil> rick_h my main goal is to work with the nova lxd which is why I'm deploying that bundle. yes, i'd like to run the juju-gui on the liberty deploy if it's possible.
<rick_h_> Gil: right, but you only need wily GUI if it's on a wily host. You can deploy the trusty juju-gui onto liberty without a problem
<rick_h_> Gil: the things you deploy don't all have to be on the same series.
<Gil> ok gtk.  when I deployed the bundle: https://jujucharms.com/u/openstack-charmers-next/openstack-lxd it errored out and complained about the charms not matching the series so I went to "all -wily".  So what I need to do then is change in environments.yaml back to "default-series: trusty" I guess which I will try now.
<lazyPower> Gil - that or juju deploy trusty/juju-gui
<lazyPower> tvansteenburgh - got a moment for a quick review? https://github.com/juju-solutions/jujubox/pull/2
<tvansteenburgh> lazyPower: yeah gimme a min and i'll take a look
<Gil> juju deploy --to 0 cs:trusty/juju-gui; gives:  Added charm "cs:trusty/juju-gui-48" to the environment. + ERROR cannot assign unit "juju-gui/1" to machine 0: series does not match
<lazyPower> ah thats because the state-server is wily, gotchya.
<lazyPower> i didnt think you were colocating
<tvansteenburgh> lazyPower: won't this break 1.25 users? maybe we should put these changes in a 2.0 branch?
<lazyPower> tvansteenburgh - its for :devel flavored
<lazyPower> this doesn't change :latest
<tvansteenburgh> right, but this will eventually become latest i expect
<lazyPower> one 2.0 lands
<tvansteenburgh> and then we'll have nothing for 1.25
<lazyPower> current :latest: will move to a tag for 1.25
<tvansteenburgh> roger
<lazyPower> and :dev will supplant :latest, and :dev moves to whatever is in the :devel ppa
<tvansteenburgh> lgtm then
<lazyPower> sweet \o/ progressss
<tvansteenburgh> lazyPower: yeah thanks for doing that
<lazyPower> https://hub.docker.com/r/jujusolutions/jujubox/builds/bke9qasy38rcy4s98ve2c9o/
<lazyPower> we were solid with no modifications for 8 months
<lazyPower> thats kind of impressive man. it wasn't until beta-1 landed that i had to dig in here and change some things
<lazyPower> hey rick_h_ - when i'm bootstrapping with 2.0 beta-1,  i get that env vars make it simple but is there an option for me to pass --config=path/to/aws.yaml to get my cloud keys?
<lazyPower> the cloud credentials file i use for create-model dont seem to work for bootstrap :|
<rick_h_> lazyPower: yea, you have to write out a .local/share/juju/credentials.yaml file with named credentials in it
<rick_h_> lazyPower: will get you an example in a sec
<lazyPower> ta
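The example rick_h_ promises doesn't appear in the log; for reference, the named-credentials format juju 2.0 eventually settled on looks roughly like the sketch below (the key values are placeholders, and the beta1 format may have differed).

    # ~/.local/share/juju/credentials.yaml
    credentials:
      aws:
        work-ec2-creds:
          auth-type: access-key
          access-key: AKIAEXAMPLE
          secret-key: not-a-real-secret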
<Gil> "In order to deploy a cs: Trusty charm to an alternate series machine, the charm must be locally branched to a <series>/<charm-name> directory, then juju deployed from that local repo." from link https://github.com/Ubuntu-Solutions-Engineering/openstack-installer/issues/791
<Gil> is that what I would need to do at this point?
<lazyPower> thats an option, or re-bootstrap with a different default-series
<Gil> lazyPower changed environments.yaml to "default-series: trusty" then bootstrapped and successfully (as expected) deployed juju-gui.  But then when the bundle "juju-deployer -c https://api.jujucharms.com/charmstore/v4/~openstack-charmers-next/bundle/openstack-lxd-50/archive/bundle.yaml -S -d" is deployed get "Added charm "cs:~openstack-charmers-next/wily/ceph-osd-15" to the environment. + ERROR cannot assign unit "ceph-osd/0" to machine 0:
<lazyPower> Gil you'll have to modify the bundle to change the placement of ceph-osd
<Gil> some solutions would be 1 extra machine for juju-gui or just use the local repo method
<lazyPower> you'll need to co-locate it with another wily based service
<Gil> ah
<beisner> jamespage, fyi - updated the syncs and gh repos yesterday and they're syncing ok.   https://github.com/openstack-charmers/migration-tools/blob/master/charms.txt    +lxd +ceph-osd +percona-cluster
<jamespage> beisner, ack - have the git review ready to push again
<beisner> sweet
<beisner> +ceph-mon that is
<beisner> osd was already good
<jamespage> yah
<ChrisHolcombe> can any unit do a leader set or can only the leader perform that?
<roadmr> ChrisHolcombe: "Only the leader can write to the bucket"
<ChrisHolcombe> roadmr, darn i was hoping you weren't going to say that haha
<roadmr> ChrisHolcombe: sorry :) straight from the docs: https://jujucharms.com/docs/1.25/authors-charm-leadership
<ChrisHolcombe> roadmr, ah yeah i missed that line.  thanks :)
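The rule roadmr quotes, sketched with charmhelpers' hookenv wrappers; the key and value are placeholders.

    from charmhelpers.core.hookenv import is_leader, leader_get, leader_set

    if is_leader():
        # only the leader may write to the leadership bucket
        leader_set({'admin-password': 'example-secret'})
    else:
        # any unit may read; calling leader_set here would raise an error
        password = leader_get('admin-password')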
<jamespage> beisner, hey - could you take a look at https://code.launchpad.net/~james-page/charms/trusty/neutron-gateway/tox8-test-refactor/+merge/286933
<jamespage> needed for migration - also some prep for my neutron explosion branches
<jamespage> beisner, there was some fairly nasty lack of isolation between unit tests...
<jamespage> mainly due to massaging of CONFIG_FILES directly - moved to deepcopy + modification now
<beisner> jamespage, yah i've been had by copies of dicts in py too.  deepcopy ftw.
<beisner> jamespage, want to flip `make test` to the tox method on that?
<beisner> and lint
<thedac> bdx: cargonza tells me you have an active issue that needs looking at?
<bdx> thedac: hey, yeah, do you mind?
<thedac> no, what's up?
<bdx> I have been experiencing an issue where my nova-metadata-api seems to become unavailable after service restart ...
<bdx> my instances can talk to 169.254.169.254 initially after creating tenant networks
<bdx> following that, if I restart the api-metadata service or reboot the node 169.254.169.254 becomes unavailable to the instances
<thedac> hmm, Could be MTU. Do these hosts have jumbo frames on?
<thedac> Especially after a reboot
<bdx> ok, I'm not using jumbo frames. The issue presents itself w/o instance reboot
<thedac> So that is my first suggestion. We definitely see problems with metadata when using default MTU of 1500. If at all possible set it on the neutron-gateway and the nova-compute nodes, restart the nova-api-metadata service and check
<bdx> thedac: after some introspection, I'm not seeing the 169.254.169.254 address or interface on my compute nodes ...
<bdx> ip netns list show only qrouter-283684d1-6e4d-4704-a72e-6fe6acc8e9a6
<thedac> bdx: they don't it is a special address space
<thedac> It uses multicast
<bdx> ok, gotcha. Do you know of ways to test for its existence from outside of the instance?
<thedac> The best test is on the instance. But verify nova-api-metadata is listening on 8775
<thedac> let me double check that port. That is off the top of my head
<bdx> thedac: also, I'm not seeing the metadata api service show itself here http://paste.ubuntu.com/15182540/
<thedac> Yeah, it does not show up in the service list. So check it on neutron-gateway or if you are doing metadata on the compute nodes check there. 8775 is correct
<bdx> sudo service nova-api-metadata status
<bdx> nova-api-metadata start/running, process 432790
<thedac> ok
<thedac> bdx: And this shows up in console-logs as failed access to metadata correct?
<bdx> yes.
<bdx> thedac: `ps aux | grep metadata` -> http://paste.ubuntu.com/15182600/
<thedac> ok, so I am still thinking MTU
<bdx> it seems there is a wealth of metadata processes running
<bdx> ok
<thedac> You can test this by running ping with larger and larger packet sizes on the qrouter netns
<bdx> entirely, ok
<thedac> ping -s 1472 I think
<bdx> `sudo ip netns exec q-router<#> ping -s 1472 169.254.169.254` ?
<thedac> yes
<thedac> and then with 1473 or higher
<thedac> oh, sorry
<thedac> no the IP of the instance
<thedac> not the 169.254 address
<bdx> ohh.. ping an instance?
<thedac> yes
<thedac> bdx: and regardless our best practice advice is to use jumbo frames in all openstack deploys
<bdx> thedac: good to know
<bdx> thedac: my pings are successful
<thedac> with > 1472?
<bdx> yea
<thedac> ok let's check /var/log/nova/nova-api-metadata.log for any tracebacks
<bdx> I'll check again ... my logs have been clean though
<bdx> ok
<bdx> so
<bdx> ERROR oslo.messaging._drivers.impl_rabbit [req-85055fc4-7de0-46c9-8cc2-119c0eda3430 - - - - -] AMQP server on  is unreachable
<bdx> rabbit was actually my initial suspect because I am seeing stale notifications in the rabbit queue
<thedac> It is always rabbit ;)
<bdx> but hadn't seen any errors yet ..
<thedac> lovely, ok so sounds like a rabbitmq problem. From a networking perspective can you nc -vz $RABBIT_IP 5672 from the nova-api-metadata host?
<thedac> Then we can check rabbitmq-server logs
<bdx> nc -vz 10.16.100.59 5672
<bdx> Connection to 10.16.100.59 5672 port [tcp/amqp] succeeded!
<thedac> ok, let's hope on the rabbit instance and check logs
<thedac> s/hope/hop but also hope
<bdx> ok, just launched an instance, that failed communicating with 169.254.169.254, rabbit logs show -> accepting AMQP connection <0.3547.1> (10.16.100.133:39614 -> 10.16.100.59:5672)
<thedac> Is rabbit clustered or singleton?
<bdx> singleton
<thedac> ok
<thedac> You might keep the tail on the rabbit log and restart the nova-api-metadata service and neutron-metadata service and see what we get
<bdx> ok
<bdx> omp
<thedac> It could have been temporary failure
<bdx> yea, I got a bunch of warning reports for about a second
<bdx> rabbit seems to be talkin to both services
<thedac> What were the warning messages
<thedac> ?
<bdx> =WARNING REPORT==== 23-Feb-2016::19:27:38 ===
<bdx> closing AMQP connection <0.19995.0> (10.16.100.157:42079 -> 10.16.100.59:5672):
<bdx> connection_closed_abruptly
<thedac> That could have been the stop of the metadata service depending on timing
<thedac> so you might test another instance deploy
<thedac> And watch the nova-api-metadata log as well as the rabbit log
<bdx> it was ... rabbit logs got spammed with that at the time I restart
<bdx> on it
<bdx> thedac: yea, no errors in any logs
<thedac> ok, fingers crossed for the instance
<bdx> neutron-api, neutron-gateway, nova-cloud-controller, nova-compute
<bdx> all show no errors
<bdx> instance gets stuck reaching out for metadata while booting
<thedac> ok, so I am going to keep pushing the MTU issue. metadata is susceptible to it.
<bdx> I can use the config drive as a workaround to get my user-data on to my instances for the time being ... I just feel this is really fragile though
<bdx> thedac: so .... If I create two new tenant networks, the instances get metadata just fine
<bdx> that are deployed to the new nets
<thedac> oh?
<bdx> yea
<bdx> or
<bdx> If I neutron net-delete
<bdx> and recreate the tenant networks that are affected, metadata works again
<thedac> hmm, ok, that is interesting
<bdx> right
<thedac> do they subsequently stop working or work indefinitely?
<thedac> after a re-create?
<bdx> thedac: metadata works until I restart the respective services, then stops working until I destroy and recreate again
<bdx> thedac: I'm suspicious this might be a permissions thing ...
<thedac> ok, and is this liberty?
<bdx> yea
<bdx> is the 169.254.169.254 a unix socket?
<thedac> I'll see if I can recreate this and get back to you
<bdx> thanks
<cory_fu> kjackal, lazyPower: https://github.com/juju-solutions/layer-basic/pull/37
<lazyPower> cory_fu - how can i not include some of the project meta files from base like Makefile and such?
<lazyPower> i can override that in layer.yaml right?
 * lazyPower makes a note to look at the builder readme
<cory_fu> lazyPower: I don't think there's a way to say "remove this file" (bcsaller might be able to correct me), other than just overriding it with an empty file (which isn't the same as deleting it)
<lazyPower> ah, ok.
<cory_fu> lazyPower: Maybe we need an "excludes" section in layer.yaml?
<lazyPower> I think thats a swell idea
<cory_fu> What is your use-case for that, though?
<lazyPower> just so i can say " excludes: readme, makefile, hacking.md - things like that so when i assemble my charm, if i dont have a hacking.md file, i dont have one floating around for one of the runtime layers
<cory_fu> Why would you not want those files, though?
<cory_fu> You definitely need a README
<cory_fu> And I can't see why you wouldn't also want a HACKING.md and Makefile
<magicaltrout> i sorta agree with that, i built a charm the other day and committed a load of shit for no reason other than i didn't notice it was in the output
<magicaltrout> admittedly I used bzr ignore, but still, it seems like a valid usecase, or maybe bzr ignore is the way to go! :)
<cory_fu> Actually, it looks like layer.yaml might already support an "ignore" list
<cory_fu> Yeah, you can give an ignore list that will do what you want
<cory_fu> lazyPower: ^
<lazyPower> cory_fu - ballin. I guess you found that in charm-tools docs?
<cory_fu> And it looks like it will work per-layer, so each layer can ignore things from the layer below (if you stack more than once)
<cory_fu> lazyPower: If by "docs" you mean "source"
<lazyPower> ah, right
<magicaltrout> lol
<lazyPower> one and the same, the tome of charm keeper knowledge
<cory_fu> Yeah, that should be documented, for sure
<lazyPower> ya know cory_fu - i just realized we didnt put in a reference guide to any of the layer stuff
<cory_fu> Yeah, that would be a good place for this
<cory_fu> There's a lot of functionality built in to layer.yaml alone
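Undocumented at the time, per the exchange above, but a layer.yaml using the ignore list would presumably look something like this sketch; the exact semantics live in the charm-tools source.

    # layer.yaml of a charm layer built on layer:basic
    includes:
      - layer:basic
    ignore:
      - README.md
      - HACKING.md
      - Makefile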
<lazyPower> can we card this and bring it up later this week?
 * lazyPower has k8s stuff thats stale and needs cooked
<cory_fu> Where would that card go?
<lazyPower> i'll take it and put your face on it
<cory_fu> ha
<lazyPower> that..sounded way creepier than i intended
<cory_fu> Indeed
<lazyPower> anywho, incoming notice
<lazyPower> so, word to the wise. series in metadata will make the -stable tooling (bundletester, proof) quite angry as i just found out.
<bdx> it seems when I deploy my same stack in kilo, then in liberty, my nova-api-metadata changes its location from the dhcp port (kilo), to the network:router_interface_distributed port (liberty). Was this intended? Do you know about it?
<bdx> thedac, openstack-charmers:^
<thedac> bdx: hey
<bdx> thedac: whats up
<thedac> I just had a liberty deploy up and was testing. I saw no change.
<thedac> bdx: are you doing DVR for this?
<bdx> thedac: yeah
<bdx> thedac: and local dhcp/metadata
<thedac> Ok, that is what I need to test next. I could not re-create the failure. So I will stand up a DVR deploy and keep trying
<thedac> right
<bdx> thedac: so yea, whats going on here is this -> I deploy my same stack in kilo, then in liberty, nova-api-metadata changes its location from the dhcp port (kilo), to the network:router_interface_distributed port (liberty)
<thedac> ok
<bdx> So in my spinup bash script for creating tenant networks, I did not know/had not updated the network params to reflect the change of nova-api-metadata
<bdx> i.e for kilo -> neutron subnet-update vlan110-subnet \
<bdx>   --host_routes type=dict list=true \
<bdx>   destination=10.0.20.0/24,nexthop=10.16.110.1 \
<bdx>   destination=10.15.0.0/16,nexthop=10.16.110.1 \
<bdx>   destination=10.10.0.0/16,nexthop=10.16.110.1 \
<bdx>   destination=10.16.100.0/24,nexthop=10.16.110.99 \
<bdx>   destination=10.16.111.0/24,nexthop=10.16.110.99 \
<bdx>   destination=10.16.112.0/24,nexthop=10.16.110.99 \
<bdx>   destination=169.254.169.254/32,nexthop=10.16.110.101
<bdx> but for liberty --> neutron subnet-update vlan110-subnet \
<bdx>   --host_routes type=dict list=true \
<bdx>   destination=10.0.20.0/24,nexthop=10.16.110.1 \
<bdx>   destination=10.15.0.0/16,nexthop=10.16.110.1 \
<bdx>   destination=10.10.0.0/16,nexthop=10.16.110.1 \
<bdx>   destination=10.16.100.0/24,nexthop=10.16.110.99 \
<bdx>   destination=10.16.111.0/24,nexthop=10.16.110.99 \
<bdx>   destination=10.16.112.0/24,nexthop=10.16.110.99 \
<bdx>   destination=169.254.169.254/32,nexthop=10.16.110.99
<thedac> bdx: ok, so is that working for you when you changed the route nexthop?
<bdx> thedac: yea
<thedac> ok, great. I'll figure out why things changed.
<bdx> thedac: an interesting difference --> in kilo, when you update your subnet, you must include the destination,nexthop for the 169.254 address because the host route for metadata is not automatically appended to the list of new host routes
<thedac> I was going to ask. We do not add the 169.254 route in our testing.
<bdx> so in kilo, I can update my subnet, and lose my 169.254 static route if I do not add it to the update
<bdx> in liberty, the nexthop/destination for metadata is appended to static routes automatically
<bdx> unless you override it by specifying a user defined route for 169.254 as I was
<bdx> wow
<thedac> got it, so it may be a bug in kilo
<bdx> totally
<bdx> thanks for your help on this
<thedac> no problem
#juju 2016-02-24
<gennadiy> Hi everybody. could you help me with my bundle again? https://code.launchpad.net/~dataart.telco/charms/bundles/dataart-telco-demo/bundle
<neiljerram> gennadiy, what help do you need?
<gennadiy> i know that somebody could check error log
<gennadiy> my bundle is not published yet. so i think there are some errors
<neiljerram> Have you posted the error log somewhere?
<magicaltrout> I think gennadiy means errors to the charm store
<gennadiy> yes, you are right
<gennadiy> i reviewed my bundle manually and run bundle proof, seems everything is ok
<gennadiy> anyway do we have some mechanism to get publish error? maybe by email or web portal?
<neiljerram> gennadiy, OK - I'm sorry then, because I have not reached that stage myself yet :-)
<gennadiy> before pushing i checked bundle on local env. i imported it to juju-gui.
<magicaltrout> there's nothing that is publicly published gennadiy
<magicaltrout> its all magic behind the scenes stuff
<magicaltrout> i'm not sure how bundles are deployed though gennadiy as I've just done single charms
<magicaltrout> but you don't have an ubuntu version in your charm push path, dunno if that has any impact or not
<magicaltrout> ah i see it in the docs
<magicaltrout> ignore me
<magicaltrout> although looking at the requirement to file a bug
<magicaltrout> it might be a manual process unlike general charm deployment
<magicaltrout> so you might just be sat in the queue gennadiy
<magicaltrout> https://jujucharms.com/docs/1.25/charms-bundles
<magicaltrout> step 4, 5, 6 and then
<magicaltrout> "
<magicaltrout> Someone will come along and review your bundle for inclusion."
<magicaltrout> looks like you just need to wait it out
<gennadiy> there are 2 different types of bundles: public namespace (like an official version) and user space (with a user name prefix in the path). i use user space bundles. they should be published automatically
<gennadiy> generally it takes 40 min
<gennadiy> if you have not got a result during this time, your bundle has an error :)
<gennadiy> a few people in this chat have access to the error log.
<magicaltrout> well I dunno, for charms thats certainly true
<magicaltrout> those docs seem to suggest bundles are different
<magicaltrout> but I have no idea :)
<gennadiy> is anybody joined who has access to publish error logs?
<jrwren> gennadiy: we don't exactly, but i can look at other things.
<gennadiy> jrwren: do you have possibility to check why bundle is not published yet?
<jrwren> i don't.
<stub> Starting from the example at https://github.com/cmars/nrpe-external-master-interface, do I also need an upgrade-charm hook wired up to initialize or reintialize the nrpe-external-master.available handlers?
<dweaver> Hi All, I have a Juju environment on 1.25.0 and I am getting a service constantly reconfigured.  I have 3 cinder units working fine, but then it starts to run cluster-relation-changed, says unit is ready, and then does it again, and again .....  Restarting the services constantly in the process, nagios is detecting it as down.  The cluster relations have not changed, so I am wondering how you tell why Juju is running this constantly in a loop?  Debug-log doesn't seem to tell me what triggers hooks.
<dweaver> Is there somewhere I can dig further to find out?
<jrwren> gennadiy: bundle verification failed: ["cannot validate service \"drop-conference\": configuration option \"DID_DOMAIN\" not found in charm \"cs:~dataart.telco/trusty/drop-conference-1\"","cannot validate service \"drop-conference\": configuration option \"PHONE_NUMBER\" not found in charm \"cs:~dataart.telco/trusty/drop-conference-1\""]
<magicaltrout> I know we touched on some of this stuff in Ghent, are things like that --^ going to become available to developers without having to prod you guys in Juju 2.... and the new review queue etc
<magicaltrout> a feedback mechanism about what your charm/bundle is upto in the deployment process at least
<jrwren> magicaltrout: yes, there will be no ingestion and you will be able to upload directly to charmstore yourself.
<magicaltrout> boom
<magicaltrout> good to know
<lazyPower> axino o/ heyo
<axino> hey lazyPower
<axino> o/
<lazyPower> axino i have it on good authority you guys have a bomb-diggity logstash charm over in IS
 * axino gives lazyPower a Chouffe
<lazyPower> Woo
<lazyPower> this early?
<rick_h_> lazyPower: maybe hit up jrwren as well. I know we were playing with one
<lazyPower> rick_h_ i'm already all over that one :)
<lazyPower> rick_h_ - its claimed as abandoned, but i think its about to get some TLC
<rick_h_> lazyPower: gotcha
<neiljerram> Anyone know why 'juju deployer -b ...' would fail to download code from a launchpad branch?
<neiljerram> Here's the transcript:
<neiljerram> root@calico14:~/md4# rm -rf trusty/
<neiljerram> root@calico14:~/md4# juju deployer -b -c liberty.yaml
<neiljerram> 2016-02-24 14:59:49 Using deployment envExport
<neiljerram> 2016-02-24 14:59:49 Starting deployment of envExport
<neiljerram> 2016-02-24 15:00:50 Could not branch lp:~project-calico/charms/trusty/neutron-api/liberty to trusty/neutron-api
<neiljerram>  bzr: ERROR: [Errno 2] No such file or directory
<neiljerram> 2016-02-24 15:00:50 Deployment stopped. run time: 61.54
<neiljerram> And in .bzr.log I have:
<neiljerram> 0.097  bazaar version: 2.7.0dev1
<neiljerram> 0.097  bzr arguments: [u'co', u'--lightweight', u'lp:~project-calico/charms/trusty/neutron-api/liberty', u'trusty/neutron-api']
<neiljerram> 0.104  looking for plugins in /home/ubuntu/.bazaar/plugins
<neiljerram> 0.104  looking for plugins in /usr/lib/python2.7/dist-packages/bzrlib/plugins
<neiljerram> 0.129  encoding stdout as sys.stdin encoding 'UTF-8'
<neiljerram> 0.242  ssh implementation is OpenSSH
<neiljerram> 15.043  creating branch reference in file:///home/ubuntu/md4/trusty/neutron-api/
<neiljerram> 17.685  trying to create missing lock '/home/ubuntu/md4/trusty/neutron-api/.bzr/checkout/dirstate'
<neiljerram> 17.719  opening working tree '/home/ubuntu/md4/trusty/neutron-api'
<neiljerram> 18.766  Traceback (most recent call last):
<neiljerram>   File "/usr/lib/python2.7/dist-packages/bzrlib/commands.py", line 930, in exception_to_return_code
<neiljerram>     return the_callable(*args, **kwargs)
<neiljerram>   File "/usr/lib/python2.7/dist-packages/bzrlib/commands.py", line 1121, in run_bzr
<neiljerram>     ret = run(*run_argv)
<mgz> neiljerram: please, pastebin
<neiljerram>   File "/usr/lib/python2.7/dist-packages/bzrlib/commands.py", line 673, in run_argv_aliases
<neiljerram>     return self.run(**all_cmd_args)
<neiljerram>   File "/usr/lib/python2.7/dist-packages/bzrlib/commands.py", line 697, in run
<neiljerram>     return self._operation.run_simple(*args, **kwargs)
<neiljerram>   File "/usr/lib/python2.7/dist-packages/bzrlib/cleanup.py", line 136, in run_simple
<neiljerram>     self.cleanups, self.func, *args, **kwargs)
<neiljerram>   File "/usr/lib/python2.7/dist-packages/bzrlib/cleanup.py", line 166, in _do_with_cleanups
<neiljerram>     result = func(*args, **kwargs)
<neiljerram>   File "/usr/lib/python2.7/dist-packages/bzrlib/builtins.py", line 1667, in run
<neiljerram>     accelerator_tree, hardlink)
<neiljerram>   File "/usr/lib/python2.7/dist-packages/bzrlib/branch.py", line 1469, in create_checkout
<neiljerram>     hardlink=hardlink)
<neiljerram>   File "/usr/lib/python2.7/dist-packages/bzrlib/bzrdir.py", line 907, in create_workingtree
<neiljerram>     accelerator_tree=accelerator_tree, hardlink=hardlink)
<neiljerram>   File "/usr/lib/python2.7/dist-packages/bzrlib/workingtree_4.py", line 1565, in initialize
<neiljerram>     delta_from_tree=delta_from_tree)
<neiljerram>   File "/usr/lib/python2.7/dist-packages/bzrlib/transform.py", line 2543, in build_tree
<neiljerram>     del
<magicaltrout> neiljerram: let me introduce you to pastebin or gist! ;)
<neiljerram> Sure, sorry!
<neiljerram> So here's the .bzr.log: http://pastebin.com/rMuqrLsL
<neiljerram> (Have I done that right?)
<mgz> neiljerram: yes. and the branch just has a borked symlink
<mgz> neiljerram: and you're the last commit on it
<mgz> so... just fix that?
<neiljerram> mgz, You mean there's a broken symlink somewhere in the working tree?
<neiljerram> mgz, I'm sorry, I'm not seeing the problem.  So if you are, please do say more!
<neiljerram> However, it occurs to me that I should be able to use strace to see the failing symlink call...
<mgz> that's not super useful
<mgz> because it's just a limbo name
<mgz> neiljerram: so, just get the branch with `bzr branch --no-tree lp:...`
<mgz> then you can do `BZR_PDB=1 bzr reconfigure --tree` and drop in at the error site
<neiljerram> I guess these are the relevant calls: http://pastebin.com/8Bmt9axy
<mgz> but getting the borked filename out is mildly annoying
<magicaltrout> don't ya just love bzr weirdness! ;)
<neiljerram> 'limbo'?  I see that in the strace too, but I have no idea what it means here.
<mgz> bzr stages files when creating trees
<mgz> doesn't overwrite in place
<magicaltrout> reminds me of how sad perforce used to make me ;)
<neiljerram> I'm still not understanding a lot of the terminology here.
<neiljerram> But I think you're saying that the bzr history includes a commit somewhere that added a broken symlink.
<neiljerram> And because of that, bzr is always going to fail for me with this branch.  Is that right?
<neiljerram> Anyway, I will try the commands you recommended!
<magicaltrout> thats basically it, and you need to extract a copy without a working tree to unbork it
<magicaltrout> not that i'm any more clued up on bzr either
<mgz> neiljerram: hooks/install
<mgz> neiljerram: the target is "" - which is bogus
<neiljerram> magicaltrout, thanks
<neiljerram> mgz, thanks also!
<mgz> I can just fix this, might even have push rights to the branch
<mgz> commit 142 "unknown <james.page@ubuntu.com>" dunno what he did to screw this, but goodjob
<mgz> history rewriting gone wrong? something else?
<mgz> jamespage: your git switch breaking things? ^
<neiljerram> Here's what I see with the bzr reconfigure session: http://pastebin.com/XWG7LFsG
<neiljerram> Does 'new-15' mean that the problem was introduced in revision 15 ?
<jamespage> mgz, not yet
<mgz> neiljerram: nope, it's just a placeholder name in the staging area
<mgz> neiljerram: it's actually more useful to do `bzr diff -r0..-1` and grep the diff for the symlink lines
<neiljerram> OK...
<mgz> jamespage: I am confused as to how a change from september that made it impossible to checkout a tree wasn't seen till today
<mgz> without history rewriting
<jamespage> I don't even know how to do that :-)
<mgz> jamespage: well, I guess the branches don't have push overwrite protection on lp? any one or any bot could have changed it?
<jamespage> mgz, we've not done any push --overwrites for a while now
<beisner> mgz, jamespage - afaik, 15.04 was the last push --overwrite on ~openstack-charmers space
<neiljerram> Do you already know whether the upstream openstack-charmers branch has the same problem?  I wonder if something went wrong when I pushed my fork of that to lp:~project-calico/charms/trusty/neutron-api/liberty
<neiljerram> Hmm; I don't see the same problem if I use lp:~openstack-charmers/charms/trusty/neutron-api/next instead of my fork.  So I guess there was a problem in how I pushed my fork.  I'll try killing and recreating my fork.
<jamespage> neiljerram, you might be pleased to know that as of next week, openstack charm development should be under the openstack project so git/gerrit rather than bzr/launchpad....
<neiljerram> jamespage, Yes, I will be very pleased about that!  Thanks for your work on this.
<gennadiy> jrwren: thanks. i have to disable copy/paste functionality
<jose> mbruzek: so, Ryan's in? :D
<neiljerram> Hi all, now seeing a problem with bootstrap, I wonder if anyone might help?
<jose> neiljerram: sure!
<jose> neiljerram: what's going on?
<neiljerram> It appears the problem is that machine-0 doesn't manage to bring up juju-br0
<neiljerram> And when that happens, I can't ping the machine; the logs that I can see in the vSphere console indicate that it has lost its IP address.
<neiljerram> If I deploy the same machine using just MAAS, it's fine.
<neiljerram> (Using Trusty)
<jose> what substrate are you deploying on?
<neiljerram> Not sure what 'substrate' is - MAAS?
<jose> yep, what are you deploying on (aws, maas)
<neiljerram> Yes, MAAS then.
<jose> ^those are just examples
<neiljerram> And if I just use MAAS to deploy the same machine, it's fine.
<jose> hmmk. I'm not too much into maas, but maybe there's something missing in Juju, so let's take a look!
<neiljerram> I.e. it has an IP and I can log in via ssh.
<jose> can you please execute `juju bootstrap --debug` and paste the results on paste.ubuntu.com?
<neiljerram> Sure...
<jose> thanks :D
<neiljerram> By the way, what is meant by the juju "tools" ?
<jose> the juju tools are basically what enables juju on the machines
<jose> all the binaries
<neiljerram> Here's `juju bootstrap --debug`: http://pastebin.com/cVJP56EN
<lazyPower> neiljerram - which version of maas are you using?
<neiljerram> MAAS Version 1.8.3+bzr4053-0ubuntu1 (trusty1)
<neiljerram> (that's what it says at the bottom of my MAAS web page)
<lazyPower> yep, just making sure it wasnt 1.9, i dont think 1.9 works with 1.25 - but dont hold me to that.
 * lazyPower hasn't investigated lately
<jose> hmm, from what I read juju is not creating anything, but assuming the bridge is already there
<neiljerram> can you see that in those logs?
<neiljerram> BTW, I should add, bootstrapping was working for me yesterday, and AFAIK I haven't changed anything relevant.
<neiljerram> Between yesterday and today, I've worked on and resolved another problem to do with the exact charm code that I am trying to deploy.  But I would not expect that to be relevant to the bootstrap phase.
<neiljerram> So it's quite mysterious why it has stopped working.
<neiljerram> Is there a way I can see the complete log of what Juju did to the machine after it had MAAS-booted?
<neiljerram> Or can I configure a password for 'root' or 'ubuntu' on that machine (instead of an SSH key), so that I can log in to its console?
<jose> sorry, I had to run do something but I'm back around
<jose> hmm, let me remember where those logs are stored
<neiljerram> No worries - thanks for looking at this!
<jose> happy to help :)
<neiljerram> I do have to run in about 5 minutes, though - so no pressure :-)
<jose> ok, so the machine is actually created, but juju cannot connect, right?
<neiljerram> correct
<neiljerram> I can't ping or connect with SSH, either
<jose> ok, so can you please ssh into the machine?
<neiljerram> No, I can't.
<neiljerram> (I guess because the Juju post-boot setup on the machine has removed the IP address from eth0.)
<jose> oh, oh, you can't
<jose> hmm... so the bridge is missing. I'd say we have to manually create the bridge
<jose> I don't know if someone around knows how to do it, I don't tbh
<neiljerram> ok - well I have to go now, but thanks for looking at this!
<jose> no prob! let us know when you're back and I'll see what I can do
<lazyPower> cory_fu - if you have a minute, can you help me wrap my head around why i'm getting a scope error on handling relationship data/states here? http://pastebin.ubuntu.com/15190366/
<cory_fu> lazyPower: It must be that the scope value you're passing in to self.conversation() isn't valid
<cory_fu> OH wait
<cory_fu> You're using RelationBase.get_remote()
<lazyPower> its on an interface that is scope: global on provides, scope: service on requires
<lazyPower> should i refactor those self.get_remotes() to conv = self.conversation() conv.get_remote('thing')?
<cory_fu> You're using self.get_remote() in the requires, which you can't use for anything but global scope
<lazyPower> interesting nuance
<cory_fu> lazyPower: If your scope is service, it means you can have multiple conversations (one per connected service) so you have to iterate over them
<cory_fu> lazyPower: https://github.com/chuckbutler/interface-etcd/blob/master/requires.py is GLOBAL scope.  I assume you're changing it to support multiple services?
<lazyPower> nope
<lazyPower> i clearly misunderstood scopes again :)
<lazyPower> and just reverted back to global from service
<cory_fu> lazyPower: Comment added to your PR on the etcd interface layer
<cory_fu> lazyPower: And another comment
<jamespage> beisner, hey - are you ok for a quick +1 on https://code.launchpad.net/~james-page/charms/trusty/neutron-gateway/tox8-test-refactor/+merge/286933
<jamespage> all passing now...
<lazyPower> cory_fu - added a response
<cory_fu> lazyPower: Responded to your response.  ;)
<lazyPower> gah sniped me
<lazyPower> i mean i'm fine refactoring i'm not married to this, just exhausted after trying to get a working cluster after that refactor :|
<cory_fu> The important bit is that the interface layer shouldn't put restrictions on how the charm layer implements things.  Someone might want to create an alternate etcd charm that didn't use a peer relation, or got the port from somewhere other than config, or... something
<cory_fu> Also, you tend to end up with much cleaner code when you properly separate your concerns
<beisner> jamespage, definitely good with the tox8 changes.  passing tests are good.  i don't see anything offensive in the other bits. ^
<stokachu> https://www.irccloud.com/pastebin/81w6oO7Q/
<stokachu> am i missing something with juju2 and lxd?
<rick_h_> stokachu: what ubuntu release?
<stokachu> rick_h_: oh, on trusty
<stokachu> guessing that wont work
<rick_h_> stokachu: there are a few issues with lxd and beta1; they're on the email thread with the beta notice
<rick_h_> stokachu: yes, trusty among them
<stokachu> ok
<rick_h_> stokachu: also doesn't work with lxd beta3 that just came out due to an api break the team is working on
<stokachu> rick_h_: and also doesn't work with beta4 just fyi
<stokachu> (just found out)
<rick_h_> stokachu: yea, jam is working with those folks to get it working with master
<rick_h_> stokachu: juju has to play catch up with api changes happening in lxd between beta releases
<stokachu> rick_h_: ok cool, im anxious to mess with it
<rick_h_> stokachu: +1
<lazyPower> rick_h_ - btw days later response to  your statement of cloud credentials - foudn it in the rel notes https://lists.ubuntu.com/archives/juju/2016-February/006618.html
<lazyPower> tvansteenburgh - got another minute?
<lazyPower> tvansteenburgh - pushed ~ 20 minutes ago the working charmbox branch i've cooked up. LMK if theres anything blatantly missing in there you need for QA/CI purposes. bugs as required and i'll get it fixed up
<tvansteenburgh> lazyPower: about the charmbox pr?
<tvansteenburgh> lazyPower: roger, will look in a bit, maybe tomorrow am, in the middle of something atm
<lazyPower> but this should get you on 2.0-beta1 faster than cat in a hat :)
<tvansteenburgh> cool
<lazyPower> understood. thanks mang
<tvansteenburgh> np
<nagyz> is there a way to tell juju-deployer to just use the "normal" (upstream?) charms?
<nagyz> instead of trying to find them locally?
<tvansteenburgh> nagyz: prefix them with 'cs:'
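A sketch of tvansteenburgh's suggestion in deployer's bundle format; the service and charm names are illustrative.

    # liberty.yaml-style deployer config
    envExport:
      services:
        mysql:
          charm: cs:trusty/mysql   # "cs:" fetches from the charm store
          num_units: 1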
<mux> hey - I'm new to this... I'm interested in the 'storage' charm, but I'm getting "ERROR cannot resolve charm URL "cs:trusty/storage": charm not found"
<mux> I'm sure that's something really simple, like "the charm doesn't exist for this version", but like I said, I'm new
<mux> where can I find details?
<nagyz> tvansteenburgh, ah, stupid me. :) what does cs stand for actually?
<tvansteenburgh> nagyz: charmstore
<tvansteenburgh> mux: looks like the 'storage' charm exists only for precise
<mux> tvansteenburgh: where do I look to find that?
<tvansteenburgh> mux: there are other options though: https://jujucharms.com/q/storage
<nagyz> tvansteenburgh, thanks
<tvansteenburgh> nagyz: np
<tvansteenburgh> mux: i just went to jujucharms.com and searched for 'storage'
<nagyz> one more question: when deploying something using juju-deployer and I have the machines listed in the yaml, is there really a need for +1 machine (for the bootstrapping node)?
<lazyPower> nagyz - juju wont function without the model controller
<lazyPower> so yeah, that +1 is pretty mandatory
<lazyPower> you dont need to define it in the yaml though, as that node is special to juju and its implied
<nagyz> ok so that's something that's always assumed and is outside of whatever I have described in my yaml
<nagyz> right
<lazyPower> yep yep
<nagyz> cheers
<lazyPower> cory_fu - thanks for the interface pointers
<cory_fu> np
<lazyPower> looks a lot cleaner after refactoring to your suggestion(s).
<lazyPower> so, when i get a chance, i really want to pair with you on relation scopes, and get something put together for a reference
<lazyPower> even if its just a video of you re-telling me the same things you've told me five times that i've forgotten
<cory_fu> Yeah
<lazyPower> it'll keep me from bugging you regardless
<lazyPower> irrespectively
<lazyPower> irregardlessly
<lazyPower> kwmonroe'ly
<cory_fu> 9_9
<magicaltrout> okay random question
<magicaltrout> if I wanted to add more useful mysql backup, like automysqlbackup or something that shunted semi-directly to s3, would it make sense to create a subordinate so as not to pollute the existing charm?
<magicaltrout> I'm thinking about charms like mysql where people have their own preference of backup ideas
#juju 2016-02-25
<jose> it's gonna require some variables and stuff, so I'd say subordinate, otherwise action
<firl> lazyPower: how's it going?
<gennadiy> hi, it seems i still have an issue with my bundle. i hate to disturb somebody to check the error log. do you have another method to get an error report?
<jamespage> mgz, hey - when you're around I have a bzr fast-import error I need a hand with
<mgz> jamespage: not sure I can help much with that
<gennadiy> could somebody help with a bundle publish error? can somebody review the publishing error?
<gennadiy> https://code.launchpad.net/~dataart.telco/charms/bundles/dataart-telco-demo/bundle
<beisner> jamespage, fyi neutron-api/next amulet test failing on:  n-c-c:  Unknown job: nova-api-ec2  @ trusty-liberty.   i'll propose an update at n-api.
<jamespage> beisner, I got that in my last update I think
<beisner> jamespage, even better, thanks!
<beisner> jamespage, hey i've got some control-file type stuff for new review automation that I'd like to place in our github space so they're controllable via PR.  migration-tools, or init a new thing?
<jamespage> beisner, not fussed - up to you
<beisner> jamespage, on that note, shall we have your branch sync stuff live alongside the lp:gh script foo?
<beisner> it seems like a logical place
<jamespage> beisner, sure - I'll move it over in a bit
<jamespage> beisner, http://10.245.162.36:8080/job/charm_amulet_test/521/console
<jamespage> its not ec2 or objectstore...
<beisner> neutron api relation data key missing
<jamespage> hmm
<jamespage> beisner, ok fix pushed
<beisner> jamespage, notice how jenkins-wrapped subprocess buffers sometimes get mixed up, and trace text lands somewhere odd?   like that console output ;-)  weird.
<jamespage> beisner, ftr I get two tempest smoke test failures against mitaka-staging on trusty UCA - both to do with telemetry for glance images...
<jamespage> other than that smokes well with my three inflight branches...
<beisner> jamespage, ack, i've not exercised m-staging but lmk when those are ready to run.  are the folks in os-dev who are reporting trouble using staging or updates?
<jamespage> xnox, https://code.launchpad.net/~james-page/charms/+source/openstack-on-lxd-bundle/+git/openstack-on-lxd-bundle
<jamespage> I just created a xenial one specially for you
<jamespage> xnox, it won't work just yet - we have some inflight fixes for mitaka/xenial support that I should get landed today...
<xnox> jamespage, i like! on g+ =)
<jamespage> beisner, I'm promoting proposed -> updates for mitaka - it will break charm amulet tests until the neutron-gateway and neutron-openvswitch charm updates land - on those next
<beisner> jamespage, ack
<jamespage> beisner, I think my n-gateway and n-api updates are inter-dependent - i.e. I can't get one through testing without the other
<jamespage> beisner, disable *mitaka on one first, land, land the other and then re-enable sound ok?
<beisner> jamespage, yes.  alternatively, with things like this in the past, i've deployed the topology ahead of time with the desired charm branches, then just exec  ./tests/0nn-yadda
<beisner> jamespage, i'd really like to move all of the amulet tests to just consume a bundle in the tests/ dir.  then it would be super easy for a user to manually verify something like this.
<beisner> ie.  instead of defining dicts on the fly in the test itself
<jamespage> beisner, well I need the promotion to flush through and then I'll work that
<jamespage> beisner, agreed
<jamespage> beisner, we also probably want to think about a way of defining check tests and gate tests for amulet
<jamespage> beisner, as a thirdparty CI I guess we can see when someone +2's right?
<beisner> jamespage, we should, yes.  right now our sandbox job is testing everything that hits openstack/ci-sandbox and I'm seeing notifications.
<jamespage> beisner, awesome
<beisner> today my priority is to nail down the initial amulet test chain that is triggered by that.  right now it's just a noop "you are here"
<jamespage> beisner, btw is the missing relation between glance and rmq intentional in our bundles?
<beisner> jamespage, seems like we id'd that and resolved at o-c-t.  where are you seeing that?
<jamespage> beisner, next bundle with trusty-mitaka-*
<beisner> jamespage, workload status is satisfied:  http://10.245.162.36:8080/view/Dashboards/view/OpenStack%20Deploy/job/deploy_with_deployer/428/artifact/juju-stat-tab-collect.428/*view*/
<beisner> do we need to make that a required relation in the workload status bits?
<jamespage> beisner, yeah its an optional relation
<jamespage> beisner, its only required when deploying ceilometer...
<beisner> jamespage, looks like we should add that rel to next and default yamls, and have a look at the affected amulet tests.
<jamespage> beisner, ok
<bentzy> hi. how am i supposed to increment my charm version #? I get <charm_name>-0 every time. I want to be able to run "charm build" and then update the charm by clicking "Change Version" in the gui
<magicaltrout> thanks jose. I think subordinate or a relation of some sort, that way people can plug in different implementations. I'm gonna have a think; it would be cool if, like the monitoring stuff, there was a "backup" type relation for different services that different backup platforms could consume
<magicaltrout> 12 hours lag response... not bad ;)
<wesleymason> is there a way of doing "when any" as opposed to just "when all"/"when not any", maybe I'm missing something but wasn't obvious in the reactive docs
<wesleymason> ah, just saw the any_states helper, nvm
<wesleymason> no wait, that doesn't help really
<coreycb> jamespage: could you take a look at these when you get a moment?  it's systemd support for openstack charms deploy from source.  I'll fix up tests post review.
<coreycb> https://code.launchpad.net/~corey.bryant/charm-helpers/systemd/+merge/287110
<coreycb> https://code.launchpad.net/~corey.bryant/charms/trusty/keystone/systemd/+merge/287112
<magicaltrout> i'm supposed to be doing Java dev.... lets write some charms instead \o/
<lazyPower> firl - pretty good
<lazyPower> firl - squashed the ETCD bug if you want to give it a go
<lazyPower> wesleymason - thats a known feature request, to be able to OR rather than AND @when() decorators
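A sketch of the workaround implied here: stacked @when decorators on a single handler mean "when all", so "when any" can be approximated today with two thin handlers sharing one body (the state names are hypothetical).

    from charms.reactive import when

    def reconfigure():
        # shared logic that both trigger paths need
        pass

    # registering the function twice under different names gives
    # OR semantics; stacking @when on one handler would AND the states
    @when('config.changed')
    def on_config_change():
        reconfigure()

    @when('db.connected')
    def on_db_connected(db=None):
        reconfigure()
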
<lazyPower> magicaltrout - that's a winning idea if i've ever heard one :)
<magicaltrout> lol
<magicaltrout> maybe
<magicaltrout> cory_fu: kwmonroe https://medium.com/airbnb-engineering/data-infrastructure-at-airbnb-8adfb34f169c#.4z4rc5eqo
<magicaltrout> might be of interest to you guys
<jcastro> magicaltrout: that's a great link, thanks
<magicaltrout> no probs
<jcastro> magicaltrout: were you the one working on a layered gitlab charm?
<magicaltrout> thats me
<jcastro> so I have an idea
<magicaltrout> hit me
<jcastro> so I was thinking when it's ready, we should approach them and like, ask them to highlight it on their download page, etc.
<jcastro> or wherever they put their "solutions" stuff
<magicaltrout> indeed. I have already asked Pentaho if they can highlight the PDI charm on their community pages somewhere
<jamespage> beisner, I'm going to land https://code.launchpad.net/~james-page/charms/trusty/neutron-api/mitaka-b2/+merge/287028
<jcastro> magicaltrout: ooh lmk what they say
<jamespage> its broken due to the b2 updates I just pushed to -updates for mitaka - but other tests passed
<magicaltrout> they'll say yes, me and their community guy have worked together for 10 years :P
<jcastro> magicaltrout: I would be happy to offer anyone who highlights a charm comarketing announcements, etc. on our pages
<jcastro> if that helps your argument, heh
<magicaltrout> but i need to finish the pdi charm and then i want to do a couple of videos to put on their pages so people know how to use it
<beisner> jamespage, yep, looks like the test failure is down to the n-g change dependency
<magicaltrout> its great having stuff available, not great if no one knows how to spin it up ;)
<lazyPower> magicaltrout - hi o/ you said the trigger word video
<lazyPower> magicaltrout - let me know if you want a hand there
<magicaltrout> uh oh
<magicaltrout> my videos tend to suck, although i was messing around with the idea of a green screen the other day so I can actually appear on the screen whilst running a demo behind me :)
<magicaltrout> I get bored of screencast type demos
<magicaltrout> part of why i like presentations and community meetups as a way to convey information
<jamespage> beisner, ok https://code.launchpad.net/~james-page/charms/trusty/neutron-gateway/mitaka-neutron-changes/+merge/286900 is up next sans mitaka tests
<lazyPower> I understand that sentiment
<jcastro> I think out of all of them I prefer screencasts
<jcastro> because you can be ad-hoc
<jcastro> there's no pressure to like make each thing perfect because it's live, you just roll with it.
<magicaltrout> i watch back my onair hangouts and stuff and end up in tears ;)
<magicaltrout> that said other people like them, so they must float some people's boats
<lazyPower> magicaltrout  you just hit exactly what i was trying to type out :)
<lazyPower> so, we have this juju channel over on youtube
<lazyPower> and having your pdi videos there would help discoverability
<lazyPower> i think anyway :)  wide net and all that
<magicaltrout> ah yeah
<magicaltrout> i can see you on screen :P
<lazyPower> O_o
<lazyPower> jcastro - can we put a link to the youtube channel in /topic?
* lazyPower changed the topic of #juju to: || Welcome to Juju! || Juju Charmer Summit: http://summit.juju.solutions || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
<jcastro> drop the summit bit, that should let the topic fit
* lazyPower changed the topic of #juju to: || Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
<jcastro> it's https://www.youtube.com/c/jujucharms
* lazyPower changed the topic of #juju to: || Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms
<magicaltrout> aww shit
<magicaltrout> i can see my face
<magicaltrout> i tried so hard to hide in that photograph
<magicaltrout> talking of presentations, I have a presentation in the summer at NASA JPL which will involve Data Management & Juju. Now if an organisation needed some help doing application modelling, that's one! ;)
<magicaltrout> i suspect they are rather stuck in their ways, but you never know
<lazyPower> I've seen some NASA videos and I've seen Unity in the background. I have a sneaking suspicion they <3 ubuntu and they'll <3 juju just as much
<lazyPower> speaking of which, i wonder if anyone patched the mars rover for heartbleed
<magicaltrout> hehe
<magicaltrout> when writing a layered charm, is there anything stopping me creating a bunch of different rective/*.py stuff so I can dump my state changes in one and hooks in another?
<stub> magicaltrout: You can create as many *.py files as you like in reactive. I'd suggest putting them in reactive/mycharm/*.py though to avoid any possibility of clashes.
<stub> magicaltrout: But if you want them to be able to import from one another, you currently need a hack.
<magicaltrout> thanks stub, not too worried about the importing I don't think, just some logical separation
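A sketch of the layout stub suggests, with handlers split across reactive/mycharm/*.py (file, state, and charm names are hypothetical):

    # reactive/mycharm/install.py -- charms.reactive scans reactive/
    # recursively, so handlers can live in a per-charm subdirectory to
    # avoid filename clashes with handlers shipped by other layers
    from charms.reactive import when_not, set_state

    @when_not('mycharm.installed')
    def install():
        # installation steps for the charm would go here
        set_state('mycharm.installed')
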
<jcastro> hey aisrael
<aisrael> yo
<jcastro> let's say we wanted to start doing a LXD/juju vagrantbox but like for 16.04
<jcastro> what's the tldr on getting that to work?
<jcastro> I am thinking since juju has auto-lxd setup on first run
<jcastro> the box is literally just juju and lxd preinstalled
<aisrael> That's totally doable, but we'd need to enlist utlemming to adjust the vagrant build to support it
<aisrael> I think that's the way to go, once we start building the xenial boxes
 * jcastro nods
<jcastro> aisrael: so like, if new juju2 and lxd lands next week, and we got a box out
<jcastro> that would effectively let everyone test it without jumping through the hoops of getting all of that working in trusty
<jamespage> mgz, it might be more of a bzr question - not sure...
<aisrael> jcastro: Yes
<apuimedo> jamespage: Corey Johns' nick is johnsca, right?
<cmagina> so, I'm seeing this when using bundletester with the apache-core-batch-processing bigdata-dev bundle http://pastebin.ubuntu.com/15198610/
<jamespage> apuimedo, cory_fu
<cmagina> i'm trying to use the local provider for this
<cory_fu> jamespage: Yes?
<apuimedo> cory_fu: that was me calling
<jamespage> ;)
<apuimedo> I just had the trusty cassandra charm explode on install hook
<cory_fu> apuimedo: Oh, I see
<apuimedo> on maas
<apuimedo> first time it happened
<apuimedo> and I see there's a change from less than an hour ago
<cory_fu> apuimedo: Hrm.  You have the error?
<apuimedo> sure
<apuimedo> http://paste.ubuntu.com/15198649/
<cory_fu> Also, I wouldn't have expected that MP to have ingested into the store yet
<apuimedo> that's true
<apuimedo> it looks like a repos issue actually
<cory_fu> Though it seems it has
<apuimedo> I'll try to re-trigger
 * apuimedo doing that now
<cory_fu> apuimedo: Seems from the error that something else on that unit was running apt-get at the same time
<apuimedo> same issue
<cory_fu> Did you deploy this on a fresh unit?
<apuimedo> cory_fu: it's running on its own lxc
<apuimedo> as allocated by the juju bundle deployer
<apuimedo> and yes, the dpkg lock can't be taken even now
<apuimedo> cory_fu: http://paste.ubuntu.com/15198698/
<apuimedo> cory_fu: should I just destroy the service and lxc and re-deploy the previous revision?
<magicaltrout> hello folks, attempting to mess around with this: http://pythonhosted.org/charms.templating.jinja2/ in my reactive layer but when it runs I get ImportError: No module named 'charms.templating'
<magicaltrout> am i doing something moronic?
<cory_fu> apuimedo: I don't see anything in the MP that should cause a dpkg lock issue.  The only thing that even touches it is switching from a manual fetch.add_source to fetch.configure_sources, but that shouldn't break anything
<cory_fu> And I imagine stub is well out for the day
<lazyPower> magicaltrout - pastebin your source that's using templating?
<cory_fu> apuimedo: You can try, but TBH, I'm not certain this is related to the MP
<lazyPower> oh this looks new...
<lazyPower> but more or less just an extended port of whats in charmhelpers.core.templating.jinja2
<magicaltrout> ah thats better I knew there was some templating stuff kicking around somewhere
<magicaltrout> thanks lazyPower
<lazyPower> np magicaltrout - lmk if you still have an issue w/ templating w/ that lib
<cory_fu> apuimedo: Did you deploy it with any specific config options set?
<apuimedo> cory_fu: I did
<cory_fu> Can you share those with me?
<mwenning> good morning juju team!    trying to bootstrap an all-vm system, node failing on seed_random module, see pastebin.ubuntu.com/15198749
<apuimedo> sure
<lazyPower> mwenning - troublemaker ;)
<mwenning> mwenning, ees broke :-(
<apuimedo>       options:
<apuimedo>         authenticator: AllowAllAuthenticator
<apuimedo>         cluster_name: midonet
<apuimedo>         install_keys: 7E41C00F85BFC1706C4FFFB3350200F2B999A372
<apuimedo>         install_sources: deb http://debian.datastax.com/community 2.0 main
<apuimedo>         extra_packages: dsc20
<apuimedo> cory_fu: ^^
<cory_fu> apuimedo: Could it be an issue with the datastax PPA?
<apuimedo> I don't know
<cory_fu> apuimedo: Oh.  install_sources is supposed to be a Yaml list (as a string)
<magicaltrout> right, another stupid question, but this is the terminology stuff confusing me more than anything else I think. I want to slap Apache2 over gitlab as an optional relation, so reverseproxy etc to gitlab
<magicaltrout> what the f**k interface do I need to slurp up?
<magicaltrout> https://api.jujucharms.com/charmstore/v5/trusty/apache2/archive/metadata.yaml from that lot
<lazyPower> interface: http
<lazyPower> reverse proxy is a consumer relationship, you just need to provide interface: http on gitlab
<apuimedo> cory_fu: these bundle settings I've been using unmodified for a week or two
<lazyPower> apache2:reverse-proxy should be enough to get it going, it's just going to read in private-address and port.    give it a go with haproxy and apache2
<magicaltrout> hmm right lazyPower, i was reading some stuff in the apache docs about the vhost-config relation but then couldn't figure out how to define it
<lazyPower> ah yeah our apache2 charm is like, geekmode++
<magicaltrout> lol
<lazyPower> IS wrote it and maintains most of it, so it's next level on the base64 encoded wizardry
<magicaltrout> yeah i have like a base64 encoded vhost template and shit
<magicaltrout> okay i'll scrub that and look to reverseproxy
<magicaltrout> so in my charm I provide http so apache2 can consume it then?
<lazyPower> should be able to
<magicaltrout> okay cool
<lazyPower> seriously deploy haproxy in front of your gitlab
<lazyPower> if you have http already implemented you're basically done
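A sketch of the provides side being described, assuming gitlab's metadata.yaml declares provides: website: {interface: http} and the charm builds in the http interface layer; haproxy's reverseproxy end then reads private-address plus the port published here (the port value is hypothetical).

    from charms.reactive import when

    @when('website.available')
    def configure_website(website):
        # publish the port on the relation; private-address is
        # filled in for us by the interface layer
        website.configure(port=80)
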
<cory_fu> apuimedo: Ah, nm.  It should accept a yaml list or a single string
<cory_fu> apuimedo: Let me try deploying with those settings.  Are you trying the previous version?
<apuimedo> not yet
<cory_fu> apuimedo: Ok, I got the same thing.  It's definitely something about those config options and I guess the change to how the sources are loaded.
<apuimedo> cory_fu: so the change modified the source management
<apuimedo> cory_fu: I'll re-deploy with the previous version
<cory_fu> Prev version seems to work fine
<cory_fu> I'm trying to see what it is about the changes, but it looks like it should just work.  :(
<apuimedo> :P
<apuimedo> maybe you can add some amulet test for the install_sources
<cory_fu> The charm does have a lot of Amulet tests, and there are specific tests added for this change.
<cory_fu> All of which are passing
<jose> so, question. when I want to implement an action in a layered charm, should I just add an 'actions' folder in the root folder?
<lazyPower> yep
<lazyPower> you'll additionally need the actions.yaml definition
<jose> all of that in the root, right?
<lazyPower> of your layer, yep
<jose> cool, thanks!
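A sketch of that layout (the action name and result key are hypothetical): actions.yaml at the layer root declares the action, and an executable script of the same name lives in actions/.

    #!/usr/bin/env python3
    # actions/backup -- must be executable; a matching "backup" entry
    # in the layer's actions.yaml declares it to juju
    import subprocess

    # action-set is one of the hook tools juju puts on an action's PATH
    subprocess.check_call(['action-set', 'outcome=success'])
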
<cory_fu> apuimedo: Ok, I've got it.  The ppa:stub/cassandra and ppa:openjdk-r/ppa are both required (with nulls for the keys).  This config works for me: http://pastebin.ubuntu.com/15199178/
<cory_fu> I think it's probably worth reverting the merge and kicking it back for that, since anyone who had been using that config option is almost certainly going to be broken
<jamespage> beisner, ok with https://code.launchpad.net/~james-page/charms/trusty/neutron-gateway/mitaka-neutron-changes/+merge/286900 ?
<kilty> has anyone had success with deploying juju on a private openstack cloud?
<kilty> I generate the metadata and tools, and begin the bootstrap. It creates an instance on my openstack cloud, but it then tells me that RROR index file has no data for cloud {RegionOne http://192.168.4.11:5000/v2.0} not found
<kilty> s\RROR\ERROR
<beisner> jamespage, looks like i, K, L are happy with the changes. having a bit deeper look, sec..
<apuimedo> cory_fu: Agreed
<beisner> jamespage, ok, pretty sure i understand everything going on there.  +1
<jamespage> beisner, thanks
<jamespage> beisner, https://code.launchpad.net/~james-page/charms/trusty/neutron-openvswitch/mitaka-neutron-changes/+merge/286910
<jamespage> that next...
<cory_fu> apuimedo: It's been reverted and should be ingested before too long
<apuimedo> cory_fu: thanks. Much appreciated
<cory_fu> apuimedo: No problem.  Sorry I missed that side-effect and let this through.
<apuimedo> no problem
<apuimedo> now, on to find whatever happened with Puppet. It seems it was also updated and broke my charms...
<jamespage> dosaboy, https://code.launchpad.net/~hopem/charms/trusty/ceph/lp1523871/+merge/286487 lgtm - but please wait for osci to complete - not long now....
<jamespage> apuimedo, module breakage?
<apuimedo> jamespage: I don't know. It's a bit strange, it's complaining about a missing hiera variable that I was never setting
<apuimedo> most likely
<apuimedo> some of our puppet module dependencies updated and became more strict about undefined values
<apuimedo> or it's not taking the default
<jamespage> apuimedo, hmm - tricky for repeatable deployments
<jamespage> apuimedo, could you snapshot the modules into the charm in some way to avoid this?
<apuimedo> jamespage: I can do better, I can pin the versions for all the puppet modules
<apuimedo> and be done with it
<jamespage> apuimedo, that indeed would be better
<apuimedo> now I only need to find the culprit :-)
<dosaboy> jamespage: thanks for the reviews
<apuimedo> I think I found it!
<apuimedo> https://forge.puppetlabs.com/ripienaar/module_data
<apuimedo> released today
<apuimedo> looks very suspicious
<apuimedo> I often wonder what makes it so hard for the ruby/puppet community to keep backwards compatibility
<kilty> has anyone had success with deploying juju on a private openstack cloud?
<apuimedo> jamespage: I got it right. Ripienaar fixed the bug ;-)
<apuimedo> I'll try to get our puppet stable module dependencies pinned
<apuimedo> so that it doesn't happen in the future
<firl> you around lazyPower ?
<cory_fu> apuimedo: Can you take a look at stub's reply on https://code.launchpad.net/~stub/charms/trusty/cassandra/ensure-thrift/+merge/279869
<apuimedo> let's see
<lazyPower> firl i am
<firl> hey man
<firl> Any updates by any chance?
<lazyPower> you bet
<lazyPower> juju deploy ~lazypower/bundle/kubernetes-core
<firl> do i need to compile the bundle again?
<firl> or the layer rather
<lazyPower> nope
<lazyPower> etcd just moved to layers
<firl> need something newer than 1.25?
<firl> ERROR expected a charm URL, got bundle URL "cs:~lazypower/bundle/kubernetes-core-0"
<lazyPower> oh, thats possible yeah
<lazyPower> argh
<lazyPower> new stuff
<firl> :)
<lazyPower> want to try with new stuff?
<firl> sure, this is a scratch environment
<lazyPower> docker pull lazypower/charmbox:devel
<lazyPower> juju 2.0-beta1
<lazyPower> i'm speaking in half fragmented sentences because i'm excited you showed up. I feel like i have a mountain of candy you've been asking for patiently
<lazyPower> and man, i gotchoo covered
<firl> haha
<firl> let me pull up a vm to do the docker stuffs
<lazyPower> ok. if you want to add devel ppa on your current scratch thing
<firl> thatâs fine too
<firl> I have a build node for when you gave me the layer stuff
<lazyPower> just know that you cant upgrade from 1.25 to 2.0 cleanly *yet*
<lazyPower> and the $JUJU_HOME's are incompatible.
<lazyPower> actually
<firl> it wonât break the juju agent environment with the state machines will it?
<lazyPower> you have the source - github.com/chuckbutler/layer-etcd - is current devel focus
<firl> ya i have that
<lazyPower> which will unblock your k8s work
<firl> ok i need to hit up a meeting, will try this right after
<lazyPower> tip of mbruzek/layer-k8s works out of the box with that replacement etcd
<lazyPower> ack, lmk if you have *any* questions firl
<apuimedo> cory_fu: replied
<firl> lazyPower ok back
<lazyPower> woo
<firl> so I need to add the devel ppa and upgrade the juju CLI locally, correct?
<firl> or should I just build the layer-k8s?
<lazyPower> just build layer-k8s
<lazyPower> additionally, you'll want to deploy cs:~lazypower/trusty/etcd-4, or build from my layer-etcd
<firl> so from a bundle prespective
<firl> this ok? http://pastebin.com/N3YEpVHm
<lazyPower> firl - that looks kind of old compared to what our config is now.. default has bumped to v1.1.7, the generated charm is "kubernetes"
<lazyPower> > charm: cs:trusty/etcd-4 - is the last -stable revision before the connection string revisions. try charm: cs:~lazypower/trusty/etcd
<firl> should I just use: https://api.jujucharms.com/charmstore/v5/~lazypower/bundle/kubernetes-core-0/archive/bundle.yaml
<lazyPower> That'll work :)
<firl> haha
<firl> so should I pull it in via gui / store? or build the layer and deploy
<lazyPower> if you're interested in just consuming the kubernetes charms, pull it in via gui/store
<lazyPower> if you want to help dev, build from layers and beat on it a bit
<lazyPower> i'll keep updating this bundle as i drop features, pre-release style
<firl> gotcha
<firl> canât pull it in via gui
<firl> âfetching the entity failedâ
<lazyPower> argh
<firl> :)
<firl> this is why I am here right? hah
<lazyPower> yeah, i'm depressed. this is due to some new things i've been piloting
<firl> haha
<firl> donât worry man
<lazyPower> if you're on 2.0, you can get at the charms
<firl> I can upgrade
<lazyPower> looks like 1.25 is only serving charms from ingest
<firl> i donât mind
<lazyPower> ok, welcome to a whole new world :)
<firl> haha upgrade instructions?  i need to upgrade the cli, the agent and the state machines?
<lazyPower> so, its an incompatible upgrade right now
<lazyPower> do you have work in 1.25 that you need to keep accessing?
<firl> I have a jenkins setup that I need
<firl> just 2 machines
<lazyPower> ok lets isolate your 2.0 environment
<firl> alright, this could be a pain but lets do it
<firl> last time I tried to create a new juju environment on openstack it had issues
<firl> Creating a new project now
<firl> is there up to date install on openstack instructions?
<lazyPower> one sec, the definitive guide came out as a mailing list post
<firl> I should have it then someplace
<lazyPower> firl https://lists.ubuntu.com/archives/juju/2016-February/006618.html
<firl> so use Xenial?
<lazyPower> nooooooo
<lazyPower> you can do this on trusty just fine
<firl> kk trusty still right?
<firl> ok
<lazyPower> first paragraph, its co-installable with 1.25
<firl> I donât care in all honesty, itâs all just vmâs
<lazyPower> update-alternatives --config juju is how you switch between the releases
<magicaltrout> I need a pissing server sat under my desk, stuff this cloud nonsense
<magicaltrout> *yawn*
<lazyPower> magicaltrout - keep it family friendly please :)
<magicaltrout> indeed, I shouldn't mention that C word....
<magicaltrout> alright then 2 hours of expense claiming done. Lets take this http interface for a spin
<jrwren> i'm so confused, i can't tell if the bad c word is cloud or co-installable or config ;]
<magicaltrout> hehe
<magicaltrout> okay so i still don't get this
<magicaltrout> unit-apache2-0: 2016-02-25 21:46:21 INFO unit.apache2/0.juju-log server.go:269 reverseproxy:4: Vhost Template not provided, not configuring: 80
<magicaltrout> i'm guessing thats quite important
<lazyPower> magicaltrout did it work with haproxy?
<magicaltrout> apache2:reverseproxy do I still pass it a vhost file?
<magicaltrout> i've not got that far yet :P
<magicaltrout> let me rip out apache and find out
<magicaltrout> okay so i'm doing something stupid, mostly because i'm trying to learn python at the same time instead of just copying the bash examples :P
<magicaltrout> https://gist.github.com/buggtb/ff52067809a00101d4f1
<magicaltrout> whats up with that?
<magicaltrout> there's no problem with the deployment but I get
<magicaltrout> unit-haproxy2-0: 2016-02-25 21:52:01 INFO unit.haproxy2/0.juju-log server.go:269 reverseproxy:7: No port in relation data for 'gitlab/0', skipping.
<magicaltrout> unit-haproxy2-0: 2016-02-25 21:52:01 INFO unit.haproxy2/0.juju-log server.go:269 reverseproxy:7: No backend servers, exiting.
<firl> lazyPower: having issues bootstrapping 2.0
<firl> not sure what the âendpointâ should be
<lazyPower> one sec, i lost the post
<firl> https://lists.ubuntu.com/archives/juju/2016-February/006618.html
<firl> that one?
<lazyPower> yeah
<lazyPower> looking it over, one sec, i dont use openstack as a substrate often so i'm not as familiar with the config
<firl> â#### Private Cloudsâ section
<lazyPower> ddellav beisner : ping
<lazyPower> firl - if this doesn't yield good results, i vote to change my answer to "build from layers and deploy"
<firl> haha
<lazyPower> this quickly turned into a ball of yarn. 2.0 was a weekend discovery for me
<firl> lol ok
<firl> so i need to have a 1.25 juju environment and then do a charm build on the git repo right?
<lazyPower> yep
<lazyPower> both git repos*
<lazyPower> layer-k8s, and layer-etcd
<firl> let me clone layer-etcd
<firl> https://github.com/chuckbutler/layer-etcd ?
<lazyPower> thats the one
<firl> ok now?
<lazyPower> with those built, replace both charms with local references, remove the default config option for kubernetes version and you should be in like flynn
<firl> in this file ?
<firl> https://api.jujucharms.com/charmstore/v5/~lazypower/bundle/kubernetes-core-0/archive/bundle.yaml
<lazyPower> yep
<lazyPower> i was thinking your bundle, but mine works
<lazyPower> either/or :) six of one, half dozen of the other
<firl> the other bundle was from matt i think
<lazyPower> oh right on
<firl> http://pastebin.com/APinzAeu
<firl> ?
<lazyPower> when you ls your $JUJU_REPOSITORY, do you see k8's or kubernetes?
<firl> i see both
<firl> let me clean
<firl> kubernetes is the latest from the charm build
<lazyPower> ok, change line 4 to local:trusty/kubernetes
<firl> on it
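A sketch of the edit being asked for: point the bundle's charm entries at the locally built charms under $JUJU_REPOSITORY (this assumes the services layout of the kubernetes-core bundle discussed above).

    import yaml

    with open('bundle.yaml') as f:
        bundle = yaml.safe_load(f)

    # local: URLs resolve against $JUJU_REPOSITORY/trusty/<charm-dir>
    bundle['services']['kubernetes']['charm'] = 'local:trusty/kubernetes'
    bundle['services']['etcd']['charm'] = 'local:trusty/etcd'

    with open('bundle-local.yaml', 'w') as f:
        yaml.safe_dump(bundle, f, default_flow_style=False)
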
<lazyPower> firl - also fyi, i think that endpoint setting in the credentials.yaml for your privcloud is the same as the auth-url in the 1.25 config
<lazyPower> your keystone endpoint
<firl> I tried that one
<firl> thatâs why I was asking
<lazyPower> :(
<lazyPower> i'll bring that up and see if i can get you an answer firl
<jamespage> beisner, if you are still around - https://code.launchpad.net/~james-page/charms/trusty/neutron-openvswitch/mitaka-neutron-changes/+merge/286910
<jamespage> lgtm now
<firl> âlocal charms not allowed for service kubernetes: local:trusty/kubernetesâ
<firl> lazyPower
<lazyPower> firl - have you exported JUJU_REPOSITORY?
<firl> http://pastebin.com/CFK6C05U
<firl> i created my local juju repository via sym links to ./trusty
<lazyPower> ah quickstart doesn't work with local urls.
<firl> :)
<lazyPower> juju-deployer does, or the juju deploy command from juju2
<firl> just going to add units
<firl> lazyPower http://pastebin.com/x0bMLf8B
<magicaltrout> https://github.com/OSBI/layer-gitlab/blob/master/reactive/gitlabhooks.py#L14 can anyone see anything stupidly obvious that means that wont install properly, yet when I paste it into debug-hooks it works as expected....
<lazyPower> firl - i'm going to EOD, but if you need anything feel free to ping
<lazyPower> i'll circle back and check in on ya after dinner
<firl> hah I sent a pastebin
<lazyPower> oh
 * lazyPower missed that
<firl> charm build doesn't give a charm that is deployable
<lazyPower> ah
<lazyPower> the etcd layer has 2.0 features in it
<lazyPower> if you delete series: -trusty out of metadata.yaml that'll fix that
<lazyPower> lame :|
<firl> hrmm
<lazyPower> firl if you pull from master i just updated the layer
<firl> for etcd?
<firl> oh i see ya, I did -series: "trusty"
<firl> WARNING failed to load charm at "/home/ubuntu/work/trusty/kubernetes": metadata: series: expected string, got []interface {}([]interface {}{"trusty", "wily", "xenial"})
<lazyPower> weird, why would k8s be complaining? it doesn't even have series in metadata.
<firl> yeah
<firl> itâs weird
<firl> it shows up in the metadata output
<firl> cleaning and rebuilding
<firl> http://pastebin.com/NyxPL9Cd
<firl> maybe something is aggregating based upon charm build deps
<lazyPower> its layer-docker that pulled that in
<lazyPower> ok, hang on, thats a breaking update
<firl> haha
<firl> we can do this another time too after you eat
<lazyPower> firl - https://github.com/juju-solutions/layer-docker/pull/27
<lazyPower> i need to land that and i think.. thats the last of the 2.0 features we baked in
<firl> haha
<firl> should I also build that locally from your pr branch or is it pretty quick for the pr
<firl> oh nm i see you already did
<lazyPower> go ahead and pave/rebuild
<beisner> jamespage, ah so good, the theory and the plan came to pass on 286910  :-)
<firl> how soon for juju 2.0? couple weeks
<firl> ?
<lazyPower> its in beta, i think it lands as -stable in a little over a month
<lazyPower> dont quote me on that though, i actually haven't asked. I've been so deep in using it :)
<firl> added charm to env
<lazyPower> woo \o/
<lazyPower> paydirt
<firl> i can have the etcd charm reside on the same units as the kubernetes units right?
<firl> haha
<lazyPower> err, dont do that
<firl> so 6 nodes
<lazyPower> we're still gutting some stuff from k8s
<lazyPower> yeah
<firl> 3 kube 3 etcd
<lazyPower> k8s launches some etcd containers on the host that we're phasing out
<lazyPower> that was a leftover from the initial import we didnt notice
<firl> to grow +1 kube node, no need to grow etcd for testing right?
<lazyPower> right
<firl> kk
<lazyPower> and you can pare down etcd to a single node
<firl> lol
<lazyPower> you only need to run 3/5 in production scenarios or proper HA testing
<firl> already triggered 3
<lazyPower> welp
<lazyPower> it supports scale up
<firl> I will remove units after
<lazyPower> but not scale down
<firl> lols
<lazyPower> thats where i'm at now in the fixes
<lazyPower> but to be fair, the old charm had the same behavior
<firl> so I will blow it away before the relation
<lazyPower> yeah
<firl> and do just 1 unit
<lazyPower> ok, off for now, will be back later
<lazyPower> o/
<firl> see ya
<firl> thanks again!
<lazyPower> hey np :) sorry it was a bumpy road getting to this point
<lazyPower> reminder that not everybody is in -beta land with me
<magicaltrout> oh you're fscking kidding me
<magicaltrout> i spent all evening messing around with a website-relation-changed hook not firing, and found that including the http interface and a @when('website.available')
<magicaltrout> sorts me out in about 30 seconds
#juju 2016-02-26
<magicaltrout> although that seems to be haproxy only and not apache2
<magicaltrout> isn't there a webpage with a list of available interfaces on it?
 * magicaltrout can't wait for the updated docs and stuff to get properly indexed
<magicaltrout> interfaces.juju.solutions that bad boy
<lazyPower> magicaltrout - you hit the nail on the head. that works as expected due to the interface being provided for you
<lazyPower> apache2, being an older charm, has no such interface documentation, and it's painful to integrate with charms that don't use interface-layers, as competing implementations of the same interface exist
<blahdeblah> wallyworld: ping - quick question: do you guys have a standard recommendation for how to manually move a bootstrap node to another system?
<blahdeblah> cc: ^ anastasiamac axw
<axw> blahdeblah: sounds like a job for backup/restore
<blahdeblah> axw: You do know that is one of our trigger words, right? :-)
<axw> blahdeblah: had bad experiences with it? there are known issues, slated for fixing in 2.1 I believe
<axw> blahdeblah: but I think it's the only "standard recommendation" we have
<blahdeblah> axw: Ask wallyworld to tell you the story sometime :-)
<axw> blahdeblah: ah, I think I may know what you're referring to :)
<blahdeblah> So should it basically work on 1.24.7?
<blahdeblah> actually, no; I think it's 1.25.1.2
 * blahdeblah checks
<axw> blahdeblah: I'm not aware of any major bugs with it that would, say, delete all your machines. just usability issues
<blahdeblah> Cool - thanks; I'll have a read of the doco later
<ryotagami> jamespage: I have a quick question, if you have time.
<admcleod1> i have a leadership question.. i have 2 nodes in a peer relation, and both report is_leader false (in the charm, and in a debug hook). apparently both were denied leadership: http://pastebin.ubuntu.com/15204742/
<admcleod1> and http://pastebin.ubuntu.com/15204752/
<admcleod1> hmm maybe looks like https://bugs.launchpad.net/juju-core/+bug/1465307
<mup> Bug #1465307: 1.24.0: Lots of "agent is lost, sorry!" messages <landscape> <regression> <juju-core:Incomplete> <https://launchpad.net/bugs/1465307>
<magicaltrout> right sod it
<magicaltrout> its friday
<magicaltrout> I'm bored
 * magicaltrout rents a cheap server to try Jorge's Xenial blog stuff
<magicaltrout> woop shiny new server with 6TB of storage
<magicaltrout> should keep me going
<magicaltrout> boom xenial here i come
<jamespage> beisner, zul : I raised merges for mongodb and mysql compat for xenial btw
<jamespage> mgz, I guess jelmer is a good person to poke right?
<jamespage> beisner, I'm getting nowhere fast with the bzr fast-import problem
<mgz> jamespage: well, he does know more about the import process than me, though not sure he has any more time
<jamespage> mgz, ok - that's what I thought but its worth a punt
<jamespage> mgz, trying to import git repo's back to bzr - all but one of the 24 repositories works just fine
<jamespage> beisner, I can think of ways to workaround this but they are not pretty
<mgz> and it's ghost revs or some other form of odd history?
<mgz> jamespage: one option would be working out what's odd about the history, redo the git import with that part rewritten, then you'll get a clean import back
<jamespage> mgz,
<jamespage> ABORT: exception occurred processing commit :393
<jamespage> bzr: ERROR: An inconsistent delta was supplied involving '<unknown>', 'havana-20160226144309-u3xhs67nd4l2ygof-220'
<jamespage> mgz, how do I resolve that back to the original bzr export git import?
<mgz> jamespage: not clear just from that, what's the diff of that commit
<jamespage> mgz, http://paste.ubuntu.com/15206137/
<jamespage> hmm - I wonder whether it's those symlinks...
<mgz> those do look like some odd filemoves
<jamespage> yes agreed
<jamespage> mgz, its like the ordering on those fast-import statements is wrong
<jamespage> http://paste.ubuntu.com/15206157/
<mgz> D templates/havana
<mgz> bet it's an issue from git not versioning directories
<mgz> so, we lost the rev info in the export/import roundtrip through git
<jamespage> mgz, I see the same if I just try to re-import the original export directly back into bzr
<jcastro> aisrael: xenial vagrant boxes do exist: http://cloud-images.ubuntu.com/xenial/current/
<jcastro> they're just not listed in on the vagrant page
<aisrael> jcastro: Ahh. I tend to use the vagrant-registered ones because that supports versioning
<mgz> jamespage: so, my preferred solution would be manipulating the original export to merge/munge the problem history
<mgz> so you're clean from there on
<jcastro> indeed
<jcastro> I think it was just an oversight, I let gaughen know
<jcastro> or they may be doing it on purpose because it's a beta
<jamespage> mgz, so the data in the original fast export file ?
<mgz> jamespage: yeah, there are also rewrite options to the fast export command
<mgz> and I think on the git import side
<beisner> o/
<gaughen> jcastro, you're specifically looking for it in the beta? because we're building dailies but it does seem to be missing from beta 1
<jcastro> gaughen: I was just wondering why we're not listing it on vagrantcloud.com
<jcastro> I was assuming because we're not released yet, which would make sense
<gaughen> jcastro, I'll follow up on that specific item
<apuimedo> jamespage: I have the mem-manager with amulet and unit tests
<apuimedo> should I add it to the existing bugs?
<apuimedo> s/bugs/bug/
<apuimedo> for review and promulgation
<jamespage> apuimedo, new charm bug please
<apuimedo> alright
<apuimedo> jamespage: https://bugs.launchpad.net/charms/+bug/1550394
<mup> Bug #1550394: New charm: mem-manager <Juju Charms Collection:New> <https://launchpad.net/bugs/1550394>
<apuimedo> I tried assigning it to you but I don't think I'm allowed to
<apuimedo> I ran the amulet tests with the local lxc provider
<apuimedo> and tried it on ha
<apuimedo> as well
<jamespage> apuimedo, let it run through the normal review process - with my current workload it's likely someone else will pick it up first...
<apuimedo> jamespage: understood
<apuimedo> thanks
<apuimedo> cory_fu: did you merge that cassandra change again?
<apuimedo> (wondering if I should update my bundles)
<cory_fu> Not yet.  I was going to shortly
<jamespage> if anyone has a spare moment - https://code.launchpad.net/~james-page/charms/trusty/mongodb/forward-compat-xenial/+merge/287312
<jamespage> fixes up compat of the mongodb charm with xenial
<apuimedo> ok, let me know, cause I'll also have to send changes to cs:trusty/midonet-api amulet tests and also cs:trusty/midonet-agent and cs:trusty/neutron-agents-midonet
<apuimedo> cory_fu: ^^
<lazyPower> jamespage - approved and merged
<jamespage> lazyPower, thankyou
<lazyPower> thanks beisner and ci :) the passing results made that a no brainer merge
<admcleod1> jamespage: hey james, did you have any ideas about the leadership stuff i mentioned earlier in here?
<jamespage> admcleod1, missed that
 * jamespage reads backscroll
<cory_fu> apuimedo: Ok, I'm going to merge now.  Link me to the MPs for the test fixes and I'll do those as well
<apuimedo> ok, just a moment
<apuimedo> cory_fu: do you know how to put the three sources on a string for amulet?
<apuimedo> I only got your multiline example for bundle/config file
<apuimedo> should I add "\n - "
<apuimedo> between the different sources
<cory_fu> apuimedo: You can also do "[source, source, source]"
<apuimedo> cool
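A sketch of that in an amulet test, packing the sources into one flow-style YAML list string (the values follow the working config discussed above; the exact deployment shape is an assumption).

    import amulet

    d = amulet.Deployment(series='trusty')
    d.add('cassandra', charm='cs:trusty/cassandra')
    # one string that parses as a YAML list; keys line up with sources,
    # with null for the PPAs that need no key
    d.configure('cassandra', {
        'install_sources': '[ppa:stub/cassandra, ppa:openjdk-r/ppa, '
                           'deb http://debian.datastax.com/community 2.0 main]',
        'install_keys': '[null, null, '
                        '7E41C00F85BFC1706C4FFFB3350200F2B999A372]',
    })
    d.setup()
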
<pmatulis> 'juju sync-tools' is apparently deprecated in juju-core2. where do its features go? what does one do if the agents do not have internet access?
<apuimedo> cory_fu: https://code.launchpad.net/~celebdor/charms/trusty/midonet-api/cassandra/+merge/287334
<apuimedo> I'll do the other two now
<magicaltrout> oooh jcastro not putting all your signups on bcc, thats a bit naughty! :P
<stokachu> so juju 2.0 has accounts which list admin@local as the admin user for a controller
<stokachu> however, you can't login to the api with admin@local, only user-admin
<stokachu> is that because the api doesn't support the login from accounts yet?
 * magicaltrout returns to bashing his head on an LXD shaped wall
<jcastro> magicaltrout: ack
<jcastro> magicaltrout: it's email, it's all spam
<magicaltrout> hehe
<admcleod1> jamespage: any ideas?
<jcastro> I am hoping the next beta of juju will fix my lxd problems
<apuimedo> cory_fu: https://code.launchpad.net/~celebdor/charms/trusty/neutron-agents-midonet/cassandra/+merge/287336
<lazyPower> rick_h_ SHARED MODELS?!
<magicaltrout> aye well lxd beta2 and juju trunk don't work but I don't know if its me messing something up or elsewhere
<magicaltrout> because when I roll back to alpha1 and stuff it still seems broken
<jcastro> it's a known issue, they're working on a release now
<jcastro> let me find the bug for you
<jcastro> https://bugs.launchpad.net/juju-core/+bug/1547268
<mup> Bug #1547268: Can't bootstrap environment after latest lxd upgrade   <juju-core:In Progress by jameinel> <https://launchpad.net/bugs/1547268>
<jcastro> is what you want
<jcastro> magicaltrout: core tells me new beta early next week with this resolved, so this is the one thing we're waiting on
<magicaltrout> well
<magicaltrout> yes to the api_compat bit
<apuimedo> cory_fu: https://code.launchpad.net/~celebdor/charms/trusty/midonet-agent/cassandra/+merge/287338
<magicaltrout> so I got the various LXD/LXC beta2 packages and installed them
<apuimedo> there, that was the last one
<jcastro> are you on trusty?
<magicaltrout> so i've downgraded which gets rid of api_compat
<stokachu> wallyworld: do you know when logging into the api will use the admin user from the accounts (admin@local) instead of 'user-admin'?
 * apuimedo taking a 30min break
<magicaltrout> I just bought a random server in the cloud and walked through your blog, so i'm sat in xenial
<magicaltrout> but with a downgraded lxc lxd stack
<magicaltrout> so I don't get the api_compat error
<jcastro> oh, I don't think that would work
<jcastro> jam: ^^ that shouldn't work right?
<cory_fu> apuimedo: Ok, I'll get to them shortly
<magicaltrout> well, I figured that as well, so I rolled the juju source right back
<magicaltrout> and it made no real difference
<magicaltrout> the bootstrap node comes up
<magicaltrout> but then can't authenticate against it and fails
<apuimedo> thanks
<magicaltrout> maybe i didn't rollback far enough, I got to 2.0 alpha 1
<jcastro> I think beta2 is the one you want
<jcastro> or did you try that one too?
<magicaltrout> hold on, LXD/LXC beta 2?
<jcastro> no, juju
<magicaltrout> hmm
<magicaltrout> dunno, i'll go dig out a tag
<magicaltrout> beta2? not alpha2?
<magicaltrout> cause it didn't get into 1.25 did it
<magicaltrout> so it ended up on the devel ppa which became 2.0
<jcastro> I'm sorry, I meant juju _alpha_2
<magicaltrout> lol
<magicaltrout> okay
<magicaltrout> building... lets see what she does
<magicaltrout> na exactly the same connection refused error
<magicaltrout> which makes me wonder what on earth its doing
<tvansteenburgh1> frankban: when you have time can you put the latest juju-deployer from pypi into juju/stable please?
<frankban> tvansteenburgh: sure, I'll do that next week
<tvansteenburgh> frankban: ta
<stokachu> anyone on the juju team able to answer my question(s) to the mailing list?
<alexisb> stokachu, yes, I would like wallyworld to answer that one, the short answer is yes it will be there but target depends on progress for the new bootstrap/controller work his team is doing
<alexisb> stokachu, I will loop him into the thread to make sure he sees it
<stokachu> alexisb: ok perfect thank you!
<firl> you around lazyPower ?
<lazyPower> firl you betchya
<firl> got it up and running
<lazyPower> awww yeaaaa
<firl> had a question about services though, how to get an external ip mapped to them
<lazyPower> We've been talking through this ourselves, mbruzek and I
<lazyPower> we had some success with consul as service discovery and putting that behind a reverse proxy
<lazyPower> the other option is to launch a pod with host port mappings which expose them on the network of the machine, like docker run -p 80:80 style
<mbruzek> firl: TL;DR; it is hard
<firl> lol
<firl> what about openstack
<firl> make it even harder? ;)
<firl> what about implementing the tcp load balancer service?
<lazyPower> i like this idea
<magicaltrout> the talk of loadbalancers and reverse proxies are banned in this channel
<lazyPower> firl - i took a look here http://kubernetes.io/v1.1/docs/user-guide/services.html#type-loadbalancer, see the subsection about External IP's
<lazyPower> looks like we just pass it config and the kubes-router does all the iptables magic
<firl> ya
<firl> itâs up to the kubernetes implementation to be able to implement it
<lazyPower> ok so, lets talk through this - thats an integration with neutron right?
<firl> magicaltrout :)
<firl> lazyPower ya
<firl> juju has access to the networking id's
<firl> juju already does this for the maas / lxc implementation also right?
<lazyPower> yeah, but this is also apples/oranges too, juju has deep integration with lxd/lxc
<lazyPower> docker is only being modeled by the charm, so all that is on us, and how our charm talks to the components, so the comparison there was a bit off in terms of whats being given automatically
<firl> well my thought is, doesn't juju also expose the network side of openstack to the juju subsystem
<firl> if juju had a way to map an IP address from neutron to the container, you could easily forward it
<lazyPower> yeah, i haven't done a lot with network spaces *yet*
<lazyPower> but its there in 2.0
<firl> juju already controls the security side
<lazyPower> right
<lazyPower> have you looked at juju network spaces docs yet?
<firl> for 2? no I haven't
<lazyPower> we should look at that and figure out how to do this :)
<firl> I have seen some stuff for LACP
<firl> I think that would be one of the right ways
<firl> the other thing you could do is just implement the service to spawn up a haProxy charm
<lazyPower> Right
<lazyPower> there's also vulcand, nginx w/ consul-template or etcd/confd
<firl> i think vulcand has done more with kube also
<firl> ( If I remember correctly )
<lazyPower> I think it really depends on the workload, and this is going to take a few bundles to get the right options together
<lazyPower> we looked into this before, and our best success was with the reverse proxy and template watchers
<lazyPower> but thats been 6 or 7 months ago
<firl> template watchers being when the kube services change?
<lazyPower> yeah, as they come up, down, etc.
<firl> +,- pods?
<firl> kk
<firl> yeah I saw a great article on it
<lazyPower> the containers registered in consul, and consul-template was rewriting an nginx config
<firl> I canât seem to find the article, but ya essentially what you have mentioned
<firl> so in the mean time until that gets resolved / figured out. how should I create a mapping? create a private subnet route to the network server and do iptables to map to the internal ip?
<mbruzek> firl: That looks like it would work, if you get that working I would love to read more about it
<firl> itâs easier for me because I have pfsense as the backend and can do that via simple routes
<firl> but that solution doesnât lend itself to most people
<lazyPower> I hear ya firl
<firl> I think the proper solution would be to leverage the juju networking stack
<firl> for exposing kube services, however being able to do ssl termination of load balancers would be a nice add with a juju bundle
<bdx> core, charmers: I'm experiencing behavior I can't understand or explain when deploying to the openstack provider using 1.25.3 released/released .... when I `juju deploy postgresql`, my machine goes into error state. juju-env -> http://paste.ubuntu.com/15209811/ , machine-0.log -> http://paste.ubuntu.com/15209834/ , nova-api-os-compute.log -> http://paste.ubuntu.com/15209842/
<bdx> core, charmers: but when I `juju deploy postgresql <anyname other then postgresql>` I have successful deploys .... has anyone heard of anything like this?
<bdx> core, charmers: my machine-0.log is also getting spammed with "2016-02-26 20:49:17 ERROR juju.rpc server.go:573 error writing response: EOF"
<bdx> core, charmers: I can reproduce this to no end -> http://paste.ubuntu.com/15209967/
<firl> lazyPower is there a way to specify what internal networking namespace to use?
<firl> ( I see it )
#juju 2016-02-27
<deanman> Just built juju for the first time and tried issuing a `juju init` but it fails with `ERROR unrecognized command: juju init`. Any hints?
<Tweakerrobin> heyhey, sorry, i got a noob question: is juju totally free?
<Tweakerrobin> I'd like to install owncloud with juju
<rick_h_> Tweakerrobin: sure thing
<rick_h_> Tweakerrobin: just costs the hardware you run it on
<marcoceppi> Tweakerrobin: just as free and open source as Ubuntu itself
<Tweakerrobin> Tnx Happy to know
#juju 2016-02-28
<deanman> Just checkout juju and build it. For some reason though `juju init` gives `ERROR unrecognized command: juju init`. Any hints?
#juju 2017-02-20
<Ankammarao> Hi, frankban
<Ankammarao> charm release is giving the error "ERROR cannot release charm or bundle: unauthorized: access denied for user "achittet""
<Ankammarao> is this because of a user issue? please help resolve it
<deanman> axw, ping!
<axw> deanman: pong?
<deanman> axw, thanks for looking into this bug :-). I still get the same error -> http://paste.ubuntu.com/24032595/
<deanman> i need juju build as well >
<deanman> ?
<deanman> built*
<axw> deanman: try passing "-u" to go get, to pull the latest source
<axw> deanman: latest source for juju-tools that is
<deanman> ok got it!
<deanman> axw, do you need to have an lxd image running already, or could you also do `juju deploy <centos7charm>` and it will work the same?
<axw> deanman: it should not need to add-machine first
<axw> deanman: juju deploy centos7-charm should work just as well
<kjackal> Good morning Juju world!
<frankban> Ankammarao: hey, what's the output of "charm whoami"
<Ankammarao> frankban : User: achittet Group membership: ibmcharmers
<frankban> Ankammarao: and what command are you using for releasing the charm?
<Ankammarao> frankban : charm release cs:~ibmcharmers/xenial/ibm-db2-0
<Ankammarao> frankban : with resource charm release cs:~ibmcharmers/xenial/ibm-db2-4 --resource ibm_db2_installer-0
<frankban> mhilton: could you please help? ^^^ ibmcharmers have write permission on the charm, achittet is part of ibmcharmers but gets: "ERROR cannot release charm or bundle: unauthorized: access denied for user "achittet"" when trying "charm release cs:~ibmcharmers/xenial/ibm-db2-0"
<frankban> Ankammarao: could you please try to log into jujucharms.com as achittet ?
<Ankammarao> frankban: You're logged in as achittet
<Ankammarao> frankban : logged in to jujucharms.com but no luck, i am still getting the same issue
<mhilton> Ankammarao, hi, what does charm whoami tell you? does the list of groups include ibmcharmers?
<frankban> mhilton: already asked, see above: User: achittet Group membership: ibmcharmers
<Ankammarao> mhilton : yes listed include ibmcharmers
<mhilton> frankban, Ankammarao: OK It looks like that charm is promulgated (at least the trusty one is) The current behaviour is that only members of ~charmers can publish promulgated charms to the stable channel (frankban please correct me if I've got that wrong)
<frankban> mhilton: ah, yes, we should probably return a better error message
<Ankammarao> mhilton : that charm was pushed to the charm store earlier with a different user and series (trusty)
<Ankammarao> and now i am trying to push it for the xenial series
<mhilton> Ankammarao, yes the charmstore applies the same rules to all charms that have the same name and owner. The different series is just seen as a variation of the same charm. you should be able to publish to any other channel (such as development or edge). In order to publish to the stable channel however we will need to find somebody who has the permission to do it. frankban do you know who is allowed off the top of your head?
<frankban> marcoceppi ^^^
<Ankammarao> mhilton : i have even tried with the same user who pushed the trusty charm earlier, but we are getting the same error
<kjackal> Hi Ankammarao, frankban, mhilton, If you have ibm-db2 promulgated you should submit a new revision request to push an update
<mhilton> Ankammarao, yes it's nothing wrong with your new user; just because the charm is deemed as being in the global namespace (promulgated), only trusted users are allowed to publish new ones.
<mhilton> Ankammarao, depending on what you are trying to do (for example if you want to test the charm) you can still publish to other channels. see the --channel flag on publish.
<kjackal> Ankammarao: frankban: mhilton: since the promulgated charm is released from the stable channel from your namespace, non-charmers lose any access rights they have on that channel (stable)
<frankban> thanks kjackal
<Ankammarao> mhilton, so now i can push the charm to other channels like edge
<mhilton> Ankammarao, yes it's only the stable channel that becomes restricted on promulgated charms.
<Ankammarao> mhilton, i pushed to cs with the edge channel and now i am unable to view the charm in the charm store
<Ankammarao> it is still showing the trusty charm only
<mhilton> Ankammarao, I think it is this one https://jujucharms.com/ibm-db2/xenial/4, IIRC the search will only return charms in the stable channel, as that's what most users are likely to want to find.
<mhilton> Ankammarao, If the charm is ready to be pushed into the stable channel then you'll need to submit a new revision request as kjackal said earlier (although I don't know how you do that)
<kjackal> Ankammarao: mhilton: Submitting a new review request is done through this interface: https://review.jujucharms.com/reviews
<mhilton> kjackal, thanks
<Ankammarao> mhilton , kjackal, frankban : thanks for your help
<kjackal> Ankammarao: what is the charmname+revision of the charm you pushed to edge?
<Ankammarao> kjackal : charmname = ibm-db2, charm-revision=4
<kjackal> Ankammarao: this charm seems to be xenial only (https://jujucharms.com/ibm-db2/xenial/4) is this what you want?
<Ankammarao> kjackal: yes, exactly
<kjackal> Ankammarao: Wouldn't you prefer to have a multiseries (trusty and xenial) charm for your promulgated charm?
<kjackal> Ankammarao: Are you dropping the support for trusty?
<Ankammarao> kjackal : no we need for both series
<kjackal> Ankammarao: Have you considered the series field in your metadata.yaml? You can set series: ['xenial', 'trusty'] there so the same charm is available for both trusty and xenial
<kjackal> https://jujucharms.com/docs/2.0/authors-charm-metadata
<Ankammarao> kjackal, yes i have mentioned both series in the metadata.yaml
<kjackal> Ankammarao: I am not sure why I see the charm only for xenial. I would expect to see both xenial and trusty as supported platforms.
<kjackal> Ankammarao: Have a look at this charm: https://jujucharms.com/apache-hadoop-plugin/trusty/15
<kjackal> Ankammarao: Right after "By bigdata-charmers" you see both trusty and xenial
<Ankammarao> kjackal: i should have pushed for both series; i will try to push it again for both xenial and trusty
<kjackal> Ankammarao: if you have a multi-series charm you do not need to specify a series when you do the push
<kjackal> Ankammarao: it should be just "charm push"
<Ankammarao> kjackal: ok, got it thank you
<magicaltrout> how are you   .
<magicaltrout> nice
<magicaltrout> stupid lag
<magicaltrout> how are you supposed to move resources around, jujuresources.install() ?
<magicaltrout> https://github.com/USCDataScience/sparkler/blob/master/sparkler-deployment/juju/sparkler/reactive/sparkler.py#L34 I have that
<magicaltrout> and https://github.com/USCDataScience/sparkler/blob/master/sparkler-deployment/juju/sparkler/resources.yaml#L2
<magicaltrout> and it doesn't do anything
<magicaltrout> no errors or execution statements in the logs
<magicaltrout> just silent failure
<magicaltrout> kjackal: any idea?
<kjackal> hello magicaltrout, reading... just a sec
<magicaltrout> cory_fu should be awake by now, he can shed light on it :)
<kjackal> magicaltrout: I think its public holidays in the US
<magicaltrout> pfft
<magicaltrout> stupid americans
<kjackal> So... we are not talking about juju resources
<kjackal> these are "old style resources"
<magicaltrout> er right
<magicaltrout> i googled juju charms resources
<magicaltrout> thats what i got :)
<magicaltrout> but, it also wants something that's not bundled with the charm
<magicaltrout> of course I could just download it using standard python, i was just assuming I was sticking with the current ecosystem
<kjackal> magicaltrout: do you want to give juju resources a try? Here is how they work: https://jujucharms.com/docs/2.0/developer-resources
<magicaltrout> i have to bundle the resource with the charm though don't I kjackal ?
<kjackal> You essentially declare the resources in metadata.yaml and then at deployment time you can attach a resource with juju attach
<magicaltrout> no then :)
<magicaltrout> I'll just grab it using standard python stuff
<kjackal> You have to upload a default (potentially dummy) resource
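A sketch of the new-style flow kjackal outlines (the resource name and file names are assumptions): declare the resource in metadata.yaml, push with a default payload, and attach the real one at deploy time:

    # metadata.yaml would grow a stanza along these lines:
    #   resources:
    #     sparkler-jar:
    #       type: file
    #       filename: sparkler.jar
    #       description: Sparkler build artifact
    # push the charm with a (possibly dummy) default resource:
    charm push . cs:~magicaltrout/sparkler --resource sparkler-jar=./dummy.jar
    # once deployed, attach the real payload:
    juju attach sparkler sparkler-jar=./sparkler-build.jar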
<kjackal> So for the "old style" resources have a look here: https://github.com/juju-solutions/layer-apache-zeppelin/blob/master/lib/charms/layer/apache_zeppelin.py#L86
<kjackal> Which I think is what you are doing
<magicaltrout> don't worry about it, i have that stuff, but the install does nothing
<magicaltrout> i'll just use urllib
<kjackal> https://github.com/juju-solutions/jujuresources/blob/master/jujuresources/backend.py#L106
<kjackal> magicaltrout: is it possible the verification is failing?
<magicaltrout> maybe its not exactly high on the debug log entries though is it :)
<magicaltrout> don't worry about it kjackal we'll move to new resources eventually
<magicaltrout> I'll stick with urllib
<kklimonda> can I change LXD image server?
<kklimonda> it seems to be defaulting to cloud-images.ubuntu.com
* lazyPower changed the topic of #juju to: http://summit.juju.solutions || https://review.jujucharms.com/ || https://jujucharms.com/docs/ || http://goo.gl/MsNu4I || Youtube: https://www.youtube.com/c/jujucharms
<bdx> magicaltrout: https://gist.github.com/jamesbeedy/c5c2dc96dc517e03ebbde44da5fffa8f
<eeemil> Hello, does anyone know how MaaS relates to "juju expose" and/or how I can set up networking on MaaS for enabling expose/unexpose functionality?
<bdx> eeemil: good question, I'm not sure the 'expose' command does anything on the maas provider ....
<bdx> eeemil: maas doesn't control any kind of port-based security, so I'm guessing the 'expose' command doesn't have any function there
<bdx> is cross-model-relations available under a feature flag somewhere?
<kwmonroe> bdx: rick_h probably knows ^^, but in case he's out for the holiday, you might try asking in #juju-dev as well.
<rick_h> bdx: yes, in 2.2 though. The in dev stuff.
<rick_h> bdx: there might have been some bits in 2.1 but you want to stick around trunk.
<erlon> folks, how do I make the bootstrap instance to use a proxy?
<erlon> Ive used all proxy, since setting env to using juju model config, but I still getting this error: http://paste.openstack.org/show/599699/
<bdx> rick_h: yeah ... just looking forward .... I want to prep my charms for XMR :-) :-) :-)
<rick_h> bdx: heh, good news is you shouldn't need anything. Just support the relation endpoints.
<bdx> rick_h: so, if I pull down 'develop', its not behind a feature flag or anything?
<bdx> rick_h: how do I "support the relation endpoints"?
<kwmonroe> erlon: there's bootstrap config and model config.. for my proxy madness, i do this to get 'em all set:
<kwmonroe> juju bootstrap --config http-proxy=$http_proxy --config https-proxy=$https_proxy --config no-proxy=$no_proxy --model-default http-proxy=$http_proxy --model-default https-proxy=$https_proxy --model-default no-proxy=$no_proxy localhost
<kwmonroe> erlon: above, i have the $[http|https|no]_proxy environ vars already exported
<erlon> kwmonroe: right that is for setting both the bootstrap and the model configs right?
<erlon> kwmonroe: is there a way to set the proxy after the bootstrap is created?
<kwmonroe> erlon: you could try "juju switch controller; juju model-config [http|https|no]-proxy=foo"
<erlon> kwmonroe: I only have 1 controller
<rick_h> bdx: sorry, off today running errands so erratic IRC times. I think it's still behind a flag in develop, and by support the relation endpoints I mean if the relation works in a model it'll work across them.
<erlon> kwmonroe: ill try to create another controller using this command you suggested
<rick_h> bdx: no special charm-fu at this time.
<kwmonroe> right erlon, "juju switch controller" means "make the bootstrap/controller model active".  and then the subsequent "juju model-config x=y" would set the config on that active model.
<erlon> kwmonroe: does this no-proxy configuration add the juju-controller net automatically or do I have to set it manually?
<bdx> rick_h: no worries
<bdx> rick_h: lets say I have mysql in model foo, and wordpress in model bar .... so `juju add-relation foo:mysql bar:wordpress` ?
<kwmonroe> erlon: i manually set it in my env to the lxd subnet... it's ugly, but i do this in my .bashrc:  export no_proxy=`echo localhost 10.44.139.{1..255} | sed 's/ /,/g'`
<kwmonroe> where 10.44.139.x is my lxd subnet
<erlon> kwmonroe: if I run juju model-config with no arguments it shows the list with the parameters that I need there; I believe that is enough to guarantee that it is using what I set, right?
<rick_h> bdx: basically yes. I think the model name on the local item can/is left off.
<bdx> ahh, nice
<kwmonroe> right erlon - if 'juju model-config' shows the right stuff, that's all you should need.
<erlon> kwmonroe: is that in the juju client container? or inside the bootstrap?
<kwmonroe> erlon: that's from my laptop's term (my juju client).  here's my full rundown:  http://paste.ubuntu.com/24034672/
<erlon> hmmm, kwmonroe, nice
<eeemil> bdx: Thanks for the answer!
<bdx> eeemil: np
<bdx> rick_h: http://paste.ubuntu.com/24034870/
<bdx> I guess I'm not sure exactly what the syntax is there
<bdx> rick_h: an application doesn't seem to know about applications in other models via juju-gui either  https://imgur.com/a/ILxcX
<bdx> possibly it is behind a flag?
<rick_h> bdx: yes behind the flag.
<rick_h> Bdx the colon is the correct syntax
<bdx> rick_h: whats the flag?
<rick_h> bdx: let me check
<bdx> rick_h: possibly "consume"? https://github.com/juju/juju/commit/73924d924b49b501b0249a612741f922d46224be
<bdx> and/or find-endpoints
<rick_h> bdx: see pm
<erlon> kwmonroe: maan, love you!
<erlon> kwmonroe: thanks very much!
<erlon> kwmonroe: are you from ubuntu?
<kwmonroe> erlon: glad it helped!  i do work for canonical, but my parents might take offense if i said i was 'from ubuntu' ;)
<kwmonroe> erlon: you're not the first to get tripped up by the bootstrap vs model proxy settings.. i'll write something up for https://jujucharms.com/docs/2.0/models-config in case others could benefit from an example.
<erlon> kwmonroe: ow, sorry haha, didn't know that difference
<erlon> kwmonroe: hmm, that would be nice. I would suggest that. Another thing that would be very valuable is a global view of how the ubuntu deployment tools work
<erlon> kwmonroe: like autopilot, juju, maas, etc.
<erlon> kwmonroe: it is very hard to find a way to put all those pieces together
<kwmonroe> ahh, yup erlon, i'll make a note of that too for our docs.  thanks for the feedback!
<erlon> kwmonroe: welcome
<erlon> kwmonroe: for example, now maas says the nodes are deployed, but what do I do now? haha, do you know any link for a how-to with juju? I was following a certification guide that does not explain this part
<kwmonroe> erlon: you lost me at "what do i do now?".  if juju is bootstrapped with your maas env, you can just start deploying charms/workloads, as described here: https://jujucharms.com/docs/stable/getting-started#deploy-applications
<kwmonroe> erlon: can you link me to the certification guide you're following?  maybe that would help me understand what you're up to.
<erlon> kwmonroe: hmm, sorry, I went ahead of what you said. I did that, deployed the controller, and then ran 'juju deploy cs:bundle/openstack-base-48 --show-log'
<kwmonroe> ah - cool erlon!  did the deployment show any errors?
<erlon> kwmonroe: Canonical & Juniper Contrail Lab Solutions Guide Prepared
<erlon> kwmonroe: I received it from our alliance partner at Canonical
<erlon> kwmonroe: there are no errors
<kwmonroe> erlon: how about "juju status"?  does that look ok?
<erlon> kwmonroe: hmm, interesting
<erlon> kwmonroe: hold on
<erlon> kwmonroe: http://paste.openstack.org/show/599709/
<erlon> kwmonroe: vms (4) have only 4GB of ram, I wonder if that is enough
<kwmonroe> erlon: you have reached the limit of my knowledge -- i'm really not sure about vm/ram reqs for the openstack-base bundle.    would you mind joining the #openstack-charms channel, posting your paste, and asking if your env meets the minimum specs?
<erlon> kwmonroe: sure, I will, but juju status revealed a lot of useful information
<erlon> kwmonroe: like, where the dashboard will be
<kwmonroe> ah yes erlon -- 'juju status' is definitely the first place to check for details about a deployment
<julen> Hi there! Is there any tutorial for installing juju on a server which already has MaaS on it?
<julen> The "juju generate-config" is not there anymore, and everything I find is build on LXD. But following the instructions I get conflicts between LXD and MaaS (as they both run on the same machine)
<erlon> kwmonroe: is there a clean way to stop everything and restart? there are some services that seem to be stuck: http://paste.openstack.org/show/599710/
<erlon> kwmonroe: or should I wait more?
<kwmonroe> erlon: checkout "juju debug-log --tail"
<kwmonroe> erlon: if that doesn't produce any output for a few minutes, then it might be stuck.  if it looks busy, i'd wait more.
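If the full firehose is too noisy, debug-log can be narrowed to the suspect units; a sketch (the unit name is illustrative, not from the log):

    # tail only the units you suspect, instead of the whole model:
    juju debug-log --tail --include unit-mysql-0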
<erlon> kwmonroe: yeah, still running, but I'll delete the deployment, increase the memory, and restart after it finishes
<kwmonroe> erlon: you don't need to be nice to it.  you can do "juju destroy-controller cloud-name-hds-4 --destroy-all-models" to tear it down even if it's not done.
<kwmonroe> it doesn't feel pain :)
<erlon> kwmonroe: hmm, nice, much better :)
<erlon> kwmonroe: Im so happy it went off that I dont want to hurt it lol
<kwmonroe> one thing to mention erlon
<kwmonroe> i'm assuming you don't have any other models on the cloud-name-hds-4 controller
 * kwmonroe should have asked that earlier
<kwmonroe> the --destroy-all-models flag will tear down the entire controller.. just fyi.
<erlon> hmm, no I dont
<erlon> kwmonroe: hmm, it seems it has finished now, what is the url of the dashboard?
<erlon> kwmonroe: it says dashboard is in: openstack-dashboard/0*    active    idle   3/lxd/2  192.168.51.78   80/tcp,443/tcp  Unit is ready
<erlon> kwmonroe: but I only can see a standard apache welcome
<erlon> 192.168.51.78/dashboard also has nothing
<erlon> kwmonroe: wait, nevermind, log says there still things being configured
<kwmonroe> erlon: not 100% what URLs are available to you, but the openstack-base readme makes me think it's http://192.168.51.78/horizon - see towards the end of this page: https://jujucharms.com/openstack-base/
<erlon> kwmonroe: hmmm, there are the requirements; sorry about filling you up with questions, I owe you a beer
<kwmonroe> :)  np erlon
<erlon> kwmonroe: have you been to the OpenStack summit?
<kwmonroe> i haven't erlon.  should i go to the next one to cash in on my free beer?
<erlon> kwmonroe: haha, I asked because beers don't have much value at the Summits, they are usually given out free to everyone, you should definitely go
<kwmonroe> well, "free for everyone" is not the same as "free, hand-delivered by erlon".  i'll mark my calendar :)
<erlon> kwmonroe: owww nice!! I'll be at the one in Sydney, meet me there :)
<erlon> kwmonroe: one more question and I'll let you live: the bundle page tells me to source novarc, but I don't know where juju saves those scripts
<kwmonroe> ah yes erlon, i see that too.  just above the section where it says "source novarc" it asks you to "download this bundle ... all commands are executed from within the expanded bundle".  to do that, see the upper-right box on that page -- the "Files" box, and click the "Download .zip" link to grab the bundle.
<kwmonroe> erlon: alternatively, wget/curl the link, as in "curl https://api.jujucharms.com/charmstore/v5/openstack-base/archive -o ~/bundle.zip", then unzip it.
<erlon> kwmonroe: hmm, sweet
<kklimonda> I have a problem with deploying openstack charms keystone (so far) is not configured with mysql vip, and not all 3 controller nodes are granted access to keystone db.
<kwmonroe> kklimonda: i'm not sure many openstack charmers are monitoring this channel today (it's a US holiday).  you might have more success in the #openstack-charms channel.
<kklimonda> oh, didn't know there was a separate channel
<kklimonda> thanks
<kwmonroe> np
<andrew-ii> I have a maas node that keeps getting stuck at "pending" when allocating LXD containers. Is there a way to get LXD debug info when it (almost?) attempts to create a container?
<andrew-ii> The command `cat /var/log/lxd/lxd.log` just shows a half-hearted attempt to get a xenial image and then... nothing ever happens after that.
#juju 2017-02-21
<kklimonda> can I define (and later reference) variables in bundles?
<kklimonda> I have to pass the same gpg key to bunch of services
<anrah> that should be a config-option on some layer / charms
<anrah> If I understood your question correctly
<anrah> for example on layer-apt there is config option install_keys which can be set on bundle-file
<kjackal> Good morning Juju world!
<SimonKLB> anyone know if it's possible to run actions in the virtualenv created by the basic layer?
<SimonKLB> or perhaps in it's own virtual environment and then with it's own wheelhouse
<kjackal> SimonKLB: Have a look here: https://github.com/juju-solutions/layer-cwr/blob/master/actions/build-on-commit
<kjackal> You need to call activate_env found in the basic layer
<SimonKLB> kjackal: perfect! thanks
<SimonKLB> kjackal: do you know how to preserve, for example, sys.argv when activating the venv?
<kjackal> SimonKLB: I do not know that. For sure you can work around this by storing information in a file.
<jacekn_> hello. Any idea when juju 2.1 might be out? Is it days/weeks/months away?
<lazyPower> Ankammarao_ hello. I dont respond to private messages :)
<lazyPower> Ankammarao_ how may I assist you?
<Ankammarao_> i am getting an error while publishing the promulgated charm
<SimonKLB> kjackal: https://github.com/juju-solutions/layer-basic/pull/90
<SimonKLB> i think that does it
<Ankammarao_> getting an error like "denied access for the user"
<Ankammarao_> lazypower: but i am able to push to other channels like edge,beta
<lazyPower> Ankammarao_ - You can push to any channel that is not stable when your charm is promulgated. Promulgation means you must submit your charm for review (http://review.jujucharms.com) and a ~charmer can promote your edge,beta channel release to stable
<Ankammarao_> lazypower : do i need to mention that the charm is promulgated when i submit it for review
<kjackal> Ankammarao_: you do not have to but please do
<lazyPower> ^
<Ankammarao_> kjackal, lazypower : ok, thank you
<lazyPower> np
<Zic> lazyPower: HELP... oh, sorry, all is working actually :)
<Zic> (hello :p)
<lazyPower> \o/ YESSSSS
<lazyPower> ITS HAPPENINGGGG
<lazyPower> Zic :) I have new goodies for you
<Zic> Juju tshirts? :p
<lazyPower> nah thats something jcastro does
<lazyPower> i have a new etcd charm
<lazyPower> with new etcd snap stuff
<lazyPower> well its a new old charm :)
<lazyPower> Zic https://github.com/juju-solutions/layer-etcd/pull/77
<catbus1> stokachu: Hi, conjure-up imports ubuntu .root.tar.gz images for novakvm, shouldn't it be ubuntu -disk1.img? or is it the lxd image will work on kvm as well?
<stokachu> catbus1, hmm i dont think so
<stokachu> lemme look at the spells
<stokachu> catbus1, xenial-server-cloudimg-amd64-disk1.img is the correct one right?
<stokachu> catbus1, for now you can do juju config nova-compute virt-type=lxd
<stokachu> catbus1, ill fix the spell so it imports kvm though
<Zic> lazyPower: oh nice, my apt pinning is living its last days then :)
<lazyPower> Zic - thats the idea. channels + snaps will make it a fantastic experience
<catbus1> stokachu: I believe so, that's what the configuration example shows in the openstack-base charm.
<stokachu> catbus1, ok ill fix that in the spell
<lazyPower> Zic - if you feel brave, build from that branch, deploy the current etcd and upgrade to that assembled charm (juju upgrade-charm etcd --path=./etcd) and you'll see "the magic" at work.
<stokachu> catbus1, once my tests complete ill push a new version to the --edge channel for you to test
<catbus1> stokachu: I can launch lxd instances fine, though. what's the difference between lxd image and -disk1.img?
<stokachu> catbus1, nova-compute supports both so depending on if you want KVM or LXD as your compute nodes
<catbus1> stokachu: no, I mean, it imports the lxd image and I can launch instances with that image on the novakvm machines. horizon shows them as launched and running.
<stokachu> catbus1, and you can access them via ssh?
<catbus1> stokachu: I couldn't, but I haven't figured out whether that's a network connectivity issue or not.
<stokachu> catbus1, yea i think horizon may be showing you incorrect info
<stokachu> but yea lemme know if you figure out if its network or not
<catbus1> ok.
<stokachu> that would be an interesting test to add
<stokachu> afaik the default compute node type is kvm
<stokachu> so it shouldnt be able to run lxd containers
<catbus1> ok.
<catbus1> in the mean time I can import the kvm images manually.
<stokachu> catbus1, ok cool, do you need help with that or are you good?
<catbus1> I am good.
<stokachu> catbus1, ok cool, ping me with what you find out, i gotta step away for a few hours
<catbus1> ok
<Zic> it's not really on-topic, but if you have any starting points: do you have any experience with DBMSs (MySQL, MariaDB, PostgreSQL) inside a K8s cluster without Vitess?
<Zic> I'm looking at this currently: https://github.com/bitnami/charts/tree/master/incubator/mariadb-cluster
<Zic> (but as you see, it's in incubator for now)
<lazyPower> Zic - i myself tend to use lxd for my stateful workloads
<lazyPower> however, if you're looking to run a chart in prod, as you've noticed, they are in incubation. I don't think any databases other than crate have really been identified as a production ready process-container database. Thats not to say you cant run PG containers in prod... just that they haven't actively promoted it as an option.
<Zic> yeah, the only solution I saw is Vitess, but in the end we're on the path to giving up on that solution
<Zic> the pdo-grpc PHP extension for Vitess's client is not mature enough for production code
<Zic> so we're falling back to the old php-mysqli connector
<Zic> (yeah, my customer is using PHP for his backend :p)
<Zic> I don't think YouTube is using PHP so I guess it's normal after all
<tvansteenburgh> i need a reality check. anyone know if lxd-in-lxd is *supposed* to work?
<tvansteenburgh> for example, deploy a machine that's a lxd container, then deploy a lxd container to that machine, then deploy a charm to that container
<tvansteenburgh> marcoceppi, lazyPower ^
<lazyPower> tvansteenburgh - i'm pretty sure its going to give you trouble if its expected to be a privileged container
<lazyPower> tvansteenburgh - however we did find that as of the 2.1 update nested lxd seems to have changed behavior. i dont think it works any longer, no.
<tvansteenburgh> lazyPower: :(
<tvansteenburgh> lazyPower: thanks for the info
<SimonKLB> is it possible to list all actions that have been run on a unit?
<SimonKLB> all action ids that is
<SimonKLB> for example if i run an action using amulet, how can i see the result later on if its not logged in the test
<tvansteenburgh> SimonKLB: the action uuid is returned from Deployment.action_do(...)
<tvansteenburgh> or UnitSentry.run_action(...)
<SimonKLB> tvansteenburgh: yea but there is no way to get hold of action ids that have been run before if you dont have any test logs where you printed the ids?
<SimonKLB> for example if i want more historical data from actions that have been run the past week or such
<tvansteenburgh> SimonKLB: yeah with amulet that'll be difficult. what's the use case?
<SimonKLB> tvansteenburgh: right now i just wanted to see the result from an action while debugging my code and i didnt print the action id in the test
<SimonKLB> so i just got curious if the action ids and/or results where stored anywhere so that they could be retrieved at a later time
<tvansteenburgh> SimonKLB: they're not. we could add something that shells out to `juju show-action-status` or something i guess
<SimonKLB> im not speaking about amulet specifically, but rather actions as a whole, i assume they are stored somewhere since you're able to fetch the results at a later time as long as you have the IDs
<SimonKLB> it would be nice to just have a command that could list all actions that have been run on a unit, like `juju action-ids [unit]`
<tvansteenburgh> juju show-action-status
<SimonKLB> ah :D
<SimonKLB> nvm then!
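For the record, the stored history can be pulled later with the stock CLI; a sketch (the action id is illustrative):

    # list recorded actions with their ids and statuses:
    juju show-action-status
    # fetch the stored output of a single action by id:
    juju show-action-output b712649d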
<SimonKLB> tvansteenburgh: btw, have you seen this ? https://github.com/juju/amulet/issues/171
<SimonKLB> it might be another miss on my part, but from the looks of it there doesnt seem to be a way for a test to both keep the model but also upgrade the charm?
<SimonKLB> now the options are, do it all from scratch or run the test on the same charms as before, right?
<marcoceppi> tvansteenburgh: it should work, but you have to have the right profiles, iirc
<tvansteenburgh> marcoceppi: where can i find info on that
<marcoceppi> tvansteenburgh: idk, I've been trying to get my lxd containers setup to support it and #lxcontainers is pretty void
<tvansteenburgh> marcoceppi: k
<tvansteenburgh> SimonKLB: gimme a few, otp
<tvansteenburgh> SimonKLB: i think you are right though, there's no good way to do that right now
<tvansteenburgh> SimonKLB: i saw your issue when you submitted it, but i haven't had time to do anything with it
<SimonKLB> tvansteenburgh: okok! it would be nice to have it when youre doing CI on charms - it's not a biggie having to reset, but the tests would run a lot faster if you could just keep it and upgrade the charms that have changed
<tvansteenburgh> SimonKLB: yeah i hear ya
<cory_fu> kjackal: I updated https://github.com/juju-solutions/layer-cwr/pull/92 so the only remaining issue should be the lxd storage
<kjackal> cory_fu: I guess so, I havent had a full test yet
<kjackal> cory_fu: should I try it tomorrow?
<cory_fu> kjackal: Yeah.  Though if kwmonroe or petevg want to help out on reviewing it, I'd like to get it in so you guys can stop giving me merge conflicts.  ;)
<kjackal> :) sounds reasonable!
<kwmonroe> i was gonna change the indentation on all the xml jobs real quick.  i'll look at cory_fu's PR after that.
<cory_fu> lol
<zeestrat> Hey juju folks, what's the status of the promulgated nagios charm and its maintenance? Would love to help on https://bugs.launchpad.net/nagios-charm/+bug/1605733 as it's breaking for us (and others), but it's unclear who's involved in the nagios-charmers and how it's maintained.
<mup> Bug #1605733: Nagios charm does not add default host checks to nagios <canonical-bootstack> <family> <nagios> <nrpe> <unknown> <Nagios Charm:New> <nagios (Juju Charms Collection):Won't Fix> <https://launchpad.net/bugs/1605733>
<stormmore> o/ Juju world
<kjackal> hi zeestrat, the metadata.yaml has the maintainers. In addition to contacting them I would suggest you send an email to the juju list stating your intention. The discussions here are a bit ... ephemeral
<erlon> kwmonroe: hey, have a few seconds?
<kwmonroe> sorry erlon, i'm out the door to go pick up my kid for the next few (like 30) minutes.  ask away though and i'll see the message when i get back..
<erlon> kwmonroe: hmm, sure ill leave a question around
<erlon> all: kwmonroe: guys, I have deployed the openstack-base charm (https://api.jujucharms.com/charmstore/v5/openstack-base/archive/bundle.yaml) and some services seem to be very broken (like some services not even having rabbit or mysql configured properly), is there a problem with the charm?
<kwmonroe> beisner: icey ^^.  erlon, can you link to a 'juju status' paste?
<erlon> https://www.irccloud.com/pastebin/o2dkfrUN/
<erlon> this is neutron.conf in neutron-gateway https://www.irccloud.com/pastebin/av0fuRhU/neutron.conf
<erlon> This is in nova-cloud-controller https://www.irccloud.com/pastebin/aYIiSWsH/nova.conf
<erlon> *one is mitaka, the other kilo*
<erlon> just for reference a service that seems to be working fine https://www.irccloud.com/pastebin/rWRTvf15/cinder.conf
<beisner> hi kwmonroe, erlon - i'd need to see the juju unit logs from the failed units neutron-gateway/0 and nova-cloud-controller/0 in order to advise;  the bundle is known-good, but certain customizations may be needed depending on the metal that it's laid down on.
<erlon> beisner: those are kvm VMs, the only thing I changed in the charms was 1 - the name of the network interfaces from eth1 to ens1, 2 - the name of the disk from sdb to vdb
<erlon> beisner: its a flat topology, 5 machines connected to an admin network, and another interface (virtual but bridged to pub) making the public interface
<erlon> beisner: how/where do I get that?
<beisner> erlon, /var/log/juju/unit-* from those two machines/vms
<erlon> beisner: didn't find a way to post the full logs, but here is the error
<erlon> https://www.irccloud.com/pastebin/pPfMlade/unit-neutron-gateway-0.log%20
<erlon> beisner: it seems it's the network configuration, but now, how do I tell juju to pass ens4 through to the container?
<erlon> beisner: some services it deploys on containers, others in the host VMs; for each of those the neutron interface will be different
<erlon> https://www.irccloud.com/pastebin/7QgsM3tg/unit-nova-cloud-controller-0.log
<erlon> beisner: may be the nova error is related to the neutron: 2017-02-20 21:53:00 ERROR juju.worker.dependency engine.go:547 "metric-collect" manifold worker returned unexpected error: failed to read charm from: /var/lib/juju/agents/unit-neutron-gateway-0/charm: stat /var/lib/juju/agents/unit-neutron-gateway-0/charm: no such file or directory
<erlon> beisner: this is the first error in unit-nova-cloud-controller-0.log
<erlon> beisner: and then after some time those one above
<beisner> erlon, neutron-gateway shouldn't  be in a container with that bundle.
<beisner> erlon, the other failure (cloud-compute-relation-changed answers = dns.resolver.query(address, rtype)) indicates that the machines in your lab can't reverse-resolve themselves/peers.
<erlon> beisner: that reverse resolving should be done by the DNS server in MaaS, shouldn't it?
<erlon> beisner: hmmm, got the neutron gateway error; it is running in a vm with only 1 interface
<lazyPower> stokachu o/
<lazyPower> er
<lazyPower> stormmore \o
<stokachu> lazyPower, \o/
<lazyPower> haha
<lazyPower> eyyyyy
<lazyPower> stokachu - incoming fun here https://github.com/juju-solutions/layer-etcd/pull/77
<stokachu> lazyPower, hah can't wait for that
<lazyPower> if you're feeling froggy, i still need to port the actions but i think thats literally it
<lazyPower> then its all snaps all the time
<lazyPower> i have verified it works on lxd :D
<marcoceppi> lazyPower: make sure you install squashfuse <3
<lazyPower> marcoceppi - LAYER-SNAP ALREADY DOES THISSSSS
<lazyPower> YASSSSSSS
<marcoceppi> YAASSSSSS QUEEENNN
<lazyPower> ikr
<stokachu> lazyPower, made a few comments
<marcoceppi> oh crap, this isn't #eco
<lazyPower> stokachu ta
<lazyPower> appreciate the wip review
<lazyPower> it'll only help when i go to actually land this
<stokachu> :D
<lazyPower> anywho i need to jet, catch a flight in little over an hour
<lazyPower> cheers
<stokachu> safe flight
<cory_fu> petevg, kwmonroe: All of the issues that kjackal hit in https://github.com/juju-solutions/layer-cwr/pull/92 are resolved, if you guys were up for giving it a review
<kwmonroe> excellent cory_fu.. is this deployed on j.d-i.n?  looks like cwr-52 maybe?
<cory_fu> kwmonroe: Yep.  Port 8081 for Jenkins.
<kwmonroe> cool, i'll check it out
<xavpaice> anyone in here working on python-libjuju?
 * xavpaice is trying to ppa build a package that needs it, but it's not yet packaged - wondering who to talk to so I can help out with that
<cory_fu> xavpaice: Hey, I've done some work on python-libjuju, as has tvansteenburgh and petevg.  By package you mean a Debian package?
<petevg> cory_fu, xavpaice: I believe that tvansteenburgh was working on adding it to pypi.python.org, but there's a naming conflict (it's called juju, and so is an older Python lib).
<tvansteenburgh> it's there
<cory_fu> petevg: That's been resolved.  Pypi is correct now
<petevg> Awesome! I'm behind the times, apparently.
<cory_fu> :)
<xavpaice> yeah, it's on pypi and can be grabbed via pip, but I want to put something in a PPA that needs it
<xavpaice> so would prefer to get a deb
<xavpaice> I'm working on putting a source package into a ppa now, just don't want to duplicate someone else's work
<tvansteenburgh> xavpaice: no one else has done it that i know of
<cory_fu> xavpaice: I don't think a deb is being worked on, and I personally have no experience packaging debs but am happy to help out in any way that I can
<xavpaice> awesome - thanks
<tvansteenburgh> cory_fu, petevg: while we're all standing around the water cooler... https://github.com/juju/python-libjuju/pull/56
<xavpaice> xenial package: https://launchpad.net/~canonical-bootstack/+archive/ubuntu/bootstack-ops/+packages
<petevg> tvansteenburgh: left comments.
<tvansteenburgh> petevg: ty
<petevg> np
<cory_fu> tvansteenburgh: I got an error running the add_machines.py: http://pastebin.ubuntu.com/24043000/  I think it's a timing issue but the example probably needs to watch for the container to be fully removed
<tvansteenburgh> cory_fu: interesting, thanks. i didn't hit that
<kwmonroe> cory_fu: containerization looks great.  i'd really like to get multiple cwr jobs fired off at once.  any idea how to do that with 1 jenkins slave?
<cory_fu> kwmonroe: You just have to increase the number of executors, under Manage Jenkins
<cory_fu> I'm not sure how we'd do that in the charm.
<kwmonroe> neat!  what's a good setting?  like 1000?
<kwmonroe> i'll try 2 for now.
<cory_fu> heh
<cory_fu> That should probably be a config option on the Jenkins charm, I think
<kwmonroe> HOLY CRAP cory_fu.  it's working.  it's really working!
<cory_fu> :)
#juju 2017-02-22
<arosales> cory_fu: kwmonroe: http://i.imgur.com/QMMtsAt.gif    :-)
<lutostag> does juju support running against a snapped lxd? I get 'ERROR can't connect to the local LXD server: LXD socket not found; is LXD installed & running?' any way to fix
<anastasiamac> lutostag: wallyworld had some issues running with snapped lxd :) he may be the best person to help \o/
<stokachu> lutostag, export LXD_DIR=/var/snap/lxd/common/lxd
<stokachu> lutostag, see this https://github.com/conjure-up/conjure-up/blob/master/snap/wrappers/juju
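A sketch of stokachu's workaround in one session (the controller name is illustrative):

    # point the juju client at the snap's LXD socket, then bootstrap:
    export LXD_DIR=/var/snap/lxd/common/lxd
    juju bootstrap localhost lxd-test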
<lutostag> stokachu: my man!!! ty
<stokachu> np
<stokachu> lutostag, you can't have both deb and snap lxd installed though as they conflict on port 8443
<stokachu> and thats hardcoded in juju
<lutostag> yeah, I have been fighting a bug with juju 2.1rc when bootstrapping, seeing if blowing away lxd will fix it (and a chance to try snapped lxd on my main machine)
<stokachu> lutostag, i know lxd 2.8 which is in snap stable works fine
<stokachu> anything newer though is broken
<lutostag> stokachu: thanks
<lutostag> axw: fiddlesticks, still can't bootstrap against a wiped lxd
<axw> lutostag: I have no idea what's going on :/  do you have a firewall on your host perchance? ufw enabled?
<axw> lutostag: can you try starting a lxd container by hand, and connecting to 10.232.128.1:8443 with telnet or whatever?
<axw> lutostag: the only other thing I can think of that might be useful is the output of: lxc network show lxdbr0
<kjackal> Good morning Juju world!
<eeemil> How does Juju decide which subnet to use within a specific space? I'm deploying to MaaS where each machine has 2 NICs: one NIC is exposed to the Internet and one NIC is for internal communication. I want to deploy openstack-base. If I have 2 separate spaces (one external, one internal), Juju seems to become confused. If I have 1 space with both external and internal subnets, I can't communicate with some units as they get internal IPs listed as public...
<cnf> hi, how do i get juju to use a socks5 proxy for connections?
<cnf> or even better a pac file
<stub> cnf: I don't think you can. https://jujucharms.com/docs/devel/models-config has details of the various environment configuration options, but no mention of socks5.
<cnf> stub: that's remote proxies, from what I get
<cnf> i need it locally
<cnf> juju can't talk to the cloud API directly
<bdx> stub: I'm experiencing some odd behavior when requesting the same database from multiple applications, e.g. one application seems to have higher privs than the other
<bdx> stub: is there anything that would cause the privs for a subsequent user to be less than those of the user that requested the database initially?
<cnf> hmm
<cnf> i think setting http_proxy works
<cnf> but then i get
<cnf> ERROR invalid URL "http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson" not found
<bdx> stub: for example, after each app requests access to the database, one of the apps can communicate with postgres just fine, see http://paste.ubuntu.com/24046542/
<cnf> hmm, can I pre-download whatever image it wants from http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson ?
<cnf> because i need to go through a proxy to get to my controller, but not to get the image
<bdx> the other application just barfs on `rails c`, due to postgres access http://paste.ubuntu.com/24046554/
<bdx> stub: I see that postgres has created the correct entries in pg_hba.conf http://paste.ubuntu.com/24046557/
<cnf> hmz "getting started" with juju is turning out to be quite a rabbit hole :/
<bdx> and that the database has the correct access for each user created http://paste.ubuntu.com/24046568/
<bdx> stub: what is odd, and I'm just noticing this, is that there is a database created for each user as well as a database created with the dbname that I request in each of my charms
<bdx> should the postgres charm create a db named that of each user?
<bdx> alongside the one you request?
<cnf> ugh, so i guess i can't use the vsphere provider
<bdx> this was working for me some time ago, not sure if recent changes may have impact on what I'm seeing here or what the deal is
<cnf> that leaves me with nothing to try it out
<bdx> I had changed some perms around prior to getting that paste of the \l
<cnf> any suggestions?
<icey> I was trying to test some new logic with upgrade-charm and can't deploy local changes: https://bugs.launchpad.net/juju/+bug/1666904
<mup> Bug #1666904: upgrade-charm --switch doesn't work with local charms <juju:New> <https://launchpad.net/bugs/1666904>
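Until that bug is fixed, local changes can usually still be pushed with --path rather than --switch; a sketch (the application name and path are illustrative):

    # upgrade a deployed application from a local charm directory:
    juju upgrade-charm ceph-osd --path ./charms/ceph-osd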
<cnf> where does juju store local config, btw?
<jrwren> $HOME/.local/share/juju
<cnf> thanks jrwren
<cnf> now to find a provider i can actually use
<cnf> hmz
<cnf> why would I get ERROR failed to bootstrap model: cannot start bootstrap instance: cannot run instances: cannot run instance: No available machine matches constraints: mem=3584.0 zone=default
<cnf>  ?
<cnf> hmz, I can't get any of the providers working
<cnf> wow, maas is horrible
<cnf> right, 3 hours "getting started with juju", and I got nothing working
<cnf> i'm done for the day
<cnf> can't get any provider working
<jrwren> cory_fu: I have a couple of MP for charm-haproxy. Please take a look?
<lazyPower> cnf - are you really done for the day? if not I can lend a hand in getting you unblocked
<kjackal> kwmonroe: I am merging the jenkins jobs backup and restore!
<cnf> lazyPower: i'm mostly frustrated atm :P
<cnf> got about an hour before end of day
<kwmonroe> ack kjackal - thanks!
<kwmonroe> cnf: just catching up on backscroll.. if you're still hitting the bootstrap error mentioned above, the default bootstrap requires 3.5G of ram.  you can override that with "juju bootstrap --bootstrap-constraints mem=2G", for example.
<cnf> yeah, that's not going to help. that was me misunderstanding maas
<kwmonroe> gotcha
<cnf> I just want to learn how juju works, but it seems to want cloud stuff I don't have available
<cnf> and I can't get the vsphere one working
<kwmonroe> cnf: sorry this isn't going smoothly!  i don't have vsphere/juju experience, but i can point you to free aws creds:  http://developer.juju.solutions/.  i know you're about to EOD, but perhaps it's worth signing up there so you can check out juju/aws.
<cnf> i'll look at that for learning
<cnf> but i'm mostly interested in juju for setting up openstack
<cnf> if i am understanding things correctly, maas is pretty much the only way
<cnf> but thanks for that link
<kwmonroe> np
<kwmonroe> admcleod: you still around?  what's the best guide for juju/maas/openstack these days?  is it the openstack-base readme (https://jujucharms.com/openstack-base/)?
<lazyPower> cnf - MAAS on vsphere would be your best bet yeah.
<lazyPower> we have quite a bit of testing around that in our OIL lab
<cnf> add vm's to a maas controller?
<cnf> for testing that could work, I guess
<lazyPower> cnf - yep, thats how we do it. You can also skip MAAS and use the vsphere direct https://jujucharms.com/docs/stable/help-vmware
<lazyPower> so you have 2 options there
<cnf> lazyPower: yeah, I could not get that working
<lazyPower> ok so the vsphere provider is what was giving you heartburn attempting to bootstrap?
<cnf> i need to go through a proxy to get to the vsphere api
<cnf> and then it can't get the ubuntu img list, iirc
<lazyPower> can you do me a favor, just so i can get a bug filed if there's a bug in there - juju bootstrap --debug    2>&1  | tee bootstrap.log and pastebin that log?
<lazyPower> ooooohhh ok
<lazyPower> so its a localized setup, that has some restrictions our bootstrap process assumes are not there
<cnf> ERROR invalid URL "http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson" not found
<cnf> it is
<lazyPower> you can set a proxy for the bootstrap process to use, however i do believe it works as follows: its going to fetch the cloud image to your workstation, and then push that to the bootstrap node (the docs state this) and that might also be troublesome
<cnf> yeah, i can't get the image through the proxy
<cnf> but i need the proxy to get to the vsphere
<cnf> and it doesn't support my pac file, I think
<cnf> and i could not figure out how to download the image manually
<cnf> that's where I got stuck
<lazyPower> cnf - ok, can i get you to file a bug against our docs for this? I'd like to see if we can get you a working path to resolution. The alternative would be to install maas, and then pre-load maas with both images and vm's
<lazyPower> but thats a load of extra setup steps you can forego
<cnf> yeah
<lazyPower> eg: why am i setting up pxe to juju some vms?
<cnf> indeed
<lazyPower> i feel ya, let me get you a bug link
<cnf> the end goal is to evaluate juju as a mechanism for running openstack in production
<cnf> but i'm not there yet
<lazyPower> cnf https://github.com/juju/docs/issues/new -- can you file a bug here, describing the limitations of your setup wrt the vsphere / network limitations?
<cnf> i
<lazyPower> we can see about getting some updated docs cut around those limitations and also ping the right people to weigh in on what you would need to request from your IT staff (if applicable) for allowed proxy domains.
<cnf> hmm, not sure how to word this
<cnf> (end of day, i'm tired and hungry, i'm afraid)
<lazyPower> cnf - however i do believe its just the cloud image archive and the jujucharms.com api is all thats expected.   Charms on the other hand will want hte cloudarchive bits.
<cnf> lazyPower: are you one of the devs? should I mention you in the ticket?
<lazyPower> cnf - I work on the charm ecosystem, but you bet feel free to ping me direct. i'm @chuckbutler on github
<cnf> https://github.com/juju/docs/issues/1676
<cnf> I hope that's a bit clear
<lazyPower> cnf - thanks, acknowledged receipt
<lazyPower> i'll shop this with some of the core devs when they come online and see if we cant get you unblocked
<cnf> cool, thanks
<cnf> lazyPower: from what I understand, the --config sets the proxy on the remote side
<cnf> export http_proxy=http://ip:port/ works though
<cnf> except i can't download the image through the proxy
<lazyPower> ok so confirmed its during the client side image fetch to load the cloud image.
<cnf> ie "The HTTP proxy value to configure on instances, in the HTTP_PROXY environment variable"
<cnf> for http-proxy
<lazyPower> right, the bootstrap controller is going to want proxy access as well to reach the charmstore. I'm looking now for osx proxy settings you can set on the CLI to bypass this
<lazyPower> i read that you have browser proxy working, i do believe there's a way to proxy cli tooling too, i think its with networksetup but its been quite a while since i've done that
<cnf> well, i have a pac file
<cnf> that sets proxy servers differently according to the url
<lazyPower> networksetup -setautoproxyurl "wi-fi" "http://somedomain.com/proxy.pac"
<cnf> yeah, cli tools don't respect that
<cnf> they use the http_proxy env var
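A sketch of the split setup under discussion, assuming the proxy only fronts the vSphere API and the image/charm hosts need a direct connection (the proxy address and no_proxy host list are assumptions; whether the client tooling honors no_proxy here is exactly the open question in the bug):

    export http_proxy=http://proxy.example.com:3128
    export https_proxy=$http_proxy
    # exempt the hosts juju fetches images and charms from:
    export no_proxy=cloud-images.ubuntu.com,streams.canonical.com,jujucharms.com
    juju bootstrap vsphere --debug 2>&1 | tee bootstrap.log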
<Zic> lazyPower: hi! I just saw Kubernetes 1.5.3 was out since 15th, is it already available through Juju ?
<cnf> (just like on linux)
<lazyPower> Zic - we are a bit behind with 1.6 code freeze. I can kick off a build and run e2e on that today if you want the 1.5.3 bump
<lazyPower> i suspect we can get you an edge by EOD, probably closer to beta/stable by end of week
<lazyPower> Zic - however, this depends on clean e2e results :)
<lazyPower> cnf - ok, and i guess the PAC file changes from time to time so its not really convenient to use the HTTP_PROXY url with a manual config?
<cnf> lazyPower: my main problem is that the vsphere api is behind a proxy, the REST of the internet is NOT
<Zic> was just to know :) if you have any "edge" channel that I can test on preprod I can give you some review with my usage :)
<lazyPower> cnf  sorry for basic questions, just trying to wrap my head around the domain here.
<lazyPower> Zic - ok, lets shoot for later this week then my plate is full today.
<cnf> the pac file is always the same here, just ubuntu.com needs a DIRECT connection, and vsphere needs to go through a proxy
<lazyPower> Zic  i'll try to get you an edge build by tomorrow, if all else fails, friday
<Zic> np :)
<cnf> lazyPower: and i can't have both in the shell
<cnf> and i can't split the commands, i think
<lazyPower> cnf thats unfortunate. I dont have a direct answer right now, let me think on this and see if we cant resolve this with some clever cli fu
<lazyPower> or maybe less-than-clever cli fu
<cnf> :P
<lazyPower> i'll keep up in the bug sinc eyou're close to EOD
<cnf> lazyPower: the only thing i can think of is download the image manually
<cnf> and put it somewhere juju can find it
<lazyPower> does that work for you?
<cnf> sure
<lazyPower> fantastic. Thanks for being patient cnf, i'll try to run down some answers regarding manual image upload and instructing juju what to do with that image
<lazyPower> kwmonroe nice drive by on that bug
<cnf> i responded to that kwmonroe
<cnf> to add context from here to the bug
<lazyPower> rick_h - when you have a sec to glance at https://github.com/juju/docs/issues/1676 it would be good to gather info on if we have encountered split proxy clouds before, and if there's a known path to success here.
<cnf> kwmonroe: i just noticed the no_proxy as an env var
<cnf> kwmonroe: setting that to ubuntu.com gets me to
<cnf> ERROR failed to bootstrap model: cannot package bootstrap agent binary: no prepackaged agent available and no jujud binary can be found
<admcleod> kwmonroe: im here, yeah - depends, what do you want to do with it?
<lazyPower> kwmonroe - do we know if the 2.0.3 release was pre no-more --upload-tools?
<admcleod> kwmonroe: oh. i see
<kwmonroe> oof cnf, i haven't seen that error before.  lazyPower, i dunno the state of --upload-tools in 2.0.3.
<cnf> hmm
<admcleod> there is also the option of doing openstack-on-lxd on a laptop,etc
<lazyPower> i believe that went away in the 2.0 release chain but i forget when
<cnf> no lxd on my laptop
<lazyPower> it adopted new behavior where it "just does the right thing" during the bootstrap process.
<cnf> oh, on that
<cnf> might want to hide the lxd options on the osx build
<cnf> doesn't work, anyway :P
<kwmonroe> thx admcleod - cnf was working on an openstack/juju happy-fun-time, so wanted to point him at the most recent docs.  atm, we're working through bootstrapping vsphere with proxies, so we'll need to get that sorted before diving into the 'stack.
<lazyPower> cnf - there's been work in progress to command lxd providers on remote units.
<admcleod> kwmonroe: yep read some of the scroll
<cnf> lazyPower: that would be awesome
<lazyPower> cnf  - so you can point it at a vm and suddenly you have a developer cloud on your laptop :)
<lazyPower> spiffy right?
<cnf> yep
<cnf> i do that with docker things, atm
<lazyPower> prior to that work it was a nasty hack with socat and some tls cert syncs and other fun time stuff
<cnf> uhu
<cnf> how do you communicate with it?
<cnf> a socket file?
<lazyPower> lxd is a restful api
<cnf> (recent versions of ssh support forwarding socket files)
<cnf> i saw lxd intro at fosdem a few years ago
<cnf> hmm
<cnf> lazyPower: well, lxc (as the command) is in homebrew
<cnf> (also, juju doesn't like socks5:// as proxy protocol)
<cnf> maybe i should file a bug against that as well
<cnf> ok, as they say, badly translated from dutch, my pipe is out
<cnf> i'm going home :P
<cnf> \o
<lazyPower> cheers cnf
<cnf> thanks for the help
<stormmore> o/ Juju world!
<stormmore> lazyPower, you awake today? ;-) Just noticed my first "issue" with my k8s cluster. kube-dns seems to be in CLBO but the only error I see is https://gist.github.com/cm-graham/bc9ff905ca63b06c393c08e0f33a8e33
<lazyPower> stormmore - thats a new one. can you file a bug for this? it looks like kubedns might be failing health checks
<lazyPower> stormmore - one thing we did notice is that if you have a busy cluster you might be filling the max_con_table
<stormmore> it is failing health checks, it is how I found it
<lazyPower> stormmore - https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/216
<lazyPower> related bug
<stormmore> yeah this is not a busy cluster (yet!) only running k8s services and 3 in house ones right now
<lazyPower> ok, that doesn't sound like it woudl be the culprit
<lazyPower> Cynerva ryebot - can we add a todo to add an action for re-scheduling the addons? Seems like this clears up a lot of the funky issues we've found with addons
<lazyPower> stormmore - one thing to try is scale the replicas to 0, then re-scale to n
<stormmore> being the self-hater that I am, I suspect something stupid I did ;-)
<lazyPower> stormmore - getting the pods rescheduled seems to unstick whatever the root cause is there
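A sketch of the rescale lazyPower suggests (namespace, deployment name and replica count assumed):

    # force the kube-dns pods to be rescheduled by scaling down and back up:
    kubectl --namespace kube-system scale deployment kube-dns --replicas=0
    kubectl --namespace kube-system scale deployment kube-dns --replicas=3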
<ryebot> lazyPower: sure thing
<lazyPower> ryebot - ty, in the process of fighting with merge bots :|
<ryebot> np, good luck!
<stormmore> OK lazyPower that seems to have brought it back to a healthy state for now ;-)
<lazyPower> stormmore - ok, let me know if it continues to give you issues, this bit Zic before as well
<lazyPower> if we can identify a root cause for why kubedns is messing its own bed, it would be good to capture that and get a patch submitted upstream
<lazyPower> but i also know its abysmal at giving output as to why its having trouble
<lazyPower> :|
<stormmore> lazyPower, yeah I remember seeing that :-/
<stormmore> still more useful than that 404 I am getting from the internal dev's container! :)
<lazyPower> stormmore - welp. :)
<lazyPower> i can only solve so much with magic and khai
<stormmore> lazyPower, I know this thankfully :) the devs I am working with are a little green (read: a lot!) with containers
<stormmore> wow I am really laggy today... going up to 13secs to here :-/
<Zic> lazyPower / stormmore : yep, I had this one; have you tried looking at the logs directly through the docker container? because I didn't have any useful logs through "kubectl logs"
<Zic> stormmore: in my case, it was the max conn for dnsmasq which was reached
<Zic> I just scaled kube-dns and then deleted the kube-dns pods
<Zic> (it respawned to respect the scale deployment)
<lazyPower> Zic - i dont know if thats tuneable, but we should probably investigate making that tuneable via configmaps
<Zic> yeah, it seems to be tuneable... at least the maxconn is in the starting process line of the dnsmasq container
<stormmore> Zic, I am nowhere near hitting a conntrack limit with the small number of containers I am running right now
<Zic> stormmore: I'm running ~120 containers but only 4 were responsible for the maxconn of dnsmasq
<Zic> because they were making too many resolution requests
<stormmore> lazyPower, isn't a tunable option on the hosts /proc?
<Zic> (was Cassandra pods...)
<stormmore> I am only running about a dozen right now
<Zic> stormmore: but all that said, it's not a conntrack limit in my case, just a software configuration max in the dnsmasq container of the kube-dns pod
<stormmore> ah :)
<stormmore> my team haven't realized just what we can do with kube-dns yet
<Zic> if you want to verify this issue, just run a `docker logs` on the dnsmasq container (it's bad, you should prefer kubectl logs, but in that case I didn't find these logs anywhere else except directly with the docker command)
<stormmore> not inclined to let them in on that yet, too many other things need my attention
<Zic> if you have something like "dnsmasq: Max request reached (150)" (from what I remember), try to scale kube-dns
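A sketch of that check, run on the worker hosting the pod (the container id is a placeholder and the grep pattern only approximates the dnsmasq message Zic recalls):

    # read the dnsmasq container's logs directly through docker:
    docker ps | grep dnsmasq
    docker logs <container-id> 2>&1 | grep -i 'max'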
<lazyPower> i'm going to interject that it gives me an extreme happy face to see you two self helping and talking through these issues here...
<Zic> :D
<Zic> I personally scaled kube-dns to the number of nodes I have
<Zic> (one kube-dns per nodes so)
<lazyPower> Zic - there is a horizontal autoscaler addon for this case...
<lazyPower> i wonder if thats something worth investigating as its jsut another manifest
<Zic> yes, I know you can automatize that :)
<Zic> but as my nodes will not expand daily...
<Zic> and as you told me you will take a look for this autoscaler in CDK :D
 * Zic runs
<lazyPower> haha
<lazyPower> oh sure, put the flaming bag on MY doorstep, thanks Zic  ;)
<shewless_> Hi. I'm trying to bootstrap my juju 2.0.3 with a private openstack environment. I'm making good progress (instance created, etc) but it seems to "hang" on Installing curl, cpu-checker, bridge-utils, cloud-utils, tmux
<shewless_> any ideas?
<lazyPower> shewless_ - any proxy requirements in your env?
<shewless_> Here is my bootstrap line: juju bootstrap --metadata-source /home/test/simplestreams --config network=9a7d0138-ecf6-4c16-a894-e033e5be9631 --config use-floating-ip=true myclouder openstack
<shewless_> lazyPower: no I expect the floating ip will work
<shewless_> ..and from within the instance there isn't a proxy
<lazyPower> shewless_ - can you capture the same command but add --debug and pastebin the logs? i imagine we can get some output as to why its hanging there.
<shewless_> beauty.. I will do that
<cory_fu> jrwren: Sorry for the delay.  Comments added to the two MRs
<shewless_> also: I can't seem to ssh into the instance (public key). what's the default username created?
<jrwren> cory_fu: thanks!
<lazyPower> shewless_ - there are key credentials in $HOME/.local/share/juju/ssh   so you should be able to ssh -i $HOME/.local/share/juju/ssh/id_rsa ubuntu@$IP
<jrwren> cory_fu: great catch on the lint. I'm sorry I didn't catch that myself.
<Zic> lazyPower: just a question, without real intention to do it: is upgrade-charm required/advised to be purely incremental? or can I "jump" an upgrade (imagine 1.5.1 -> 1.5.3 for CDK)?
<shewless_> lazyPower: thanks. I can see that resolv.conf doesn't have the nameservers I want in it. Is there an option for me to tell juju which nameservers to use?
<shewless_> lazyPower: trying to set the nameserver on my pre-created network to see if that helps
<shewless_> lazyPower: much better with a good DNS server! thanks for the help
<lazyPower> Zic - at present we dont have any dependent chains of charms so you should be g2g
<lazyPower> Zic - if thats not the case we'll certainly signal you before the 1.6 upgrade steps are published to help vet in your staging env
<lazyPower> shewless_ - awesome glad we got you unblocked :) sorry i stepped out for lunch in the middle of all that
<Zic> ok :)
<lazyPower> so close for your multi-master branch - https://github.com/kubernetes/kubernetes/pull/41910
<lazyPower> once that lands and fixes the queue we should be unblocked to land the multi-master patch and you can scale your control plane. couple that with an haproxy rewrite of the apilb and you should be in HA nirvana
 * lazyPower isn't ambitious or anything
<Zic> oh yeah
<Zic> with the new kube-api-loadbalancer charm, do you have plan for scaling it?
<Zic> like 2 apilb and a VIP?
<lazyPower> well we can give you the 2 apilb
<lazyPower> the impl of VIP would be up to your model environment
<lazyPower> you can ELB, you can floating ip, you can round robin dns
<lazyPower> or as you said, VIP
<lazyPower> however if you want to ELB you can probably just negate the apilb and go direct to ELB, and use that in place. but i dont have confirmation on any of that yet as its still WIP :)  but yes, i dont want to reintroduce a SPOF to solve HA
<Zic> I can maybe implement a Heartbeat VIP directly on the VM which hosts the kube-api-loadbalancer
<Zic> I think there is no overlap with Juju deployed files
<Zic> lazyPower: this part (master & kube-api-loadbalancer) of the cluster is not hosted @aws
<Zic> lazyPower: I can share you something in private :)
<stormmore> lazyPower, well it is back :-/ going to go look at the docker logs in a little bit
<shewless_> lazyPower: Now I'm trying to add some charms via the gui... I see an instance is created in openstack but the charm just says "pending" - any hints at where the logs are?
<lazyPower> shewless_ juju debug-log would be a good place to start
<lazyPower> if the instances aren't coming back you might need to remote into the controller or switch to the controller model and check the logsync
<shewless_> lazyPower.. hmm the logs are showing things happening.. maybe I just didn't wait long enough
<lazyPower> shewless_ - it can be a process sometimes, depending :)
<shewless_> BTW will it download images as needed or do I need to provide them like I did in order to bootstrap?
<lazyPower> shewless_ - should download as needed, it pulls that data from simplestreams
<shewless_> lazyPower: so I have to add the images to the simplestream then?
<lazyPower> shewless_ - give me a moment to re-read thread. i'm in standup and want to answer this correctly
<stormmore> lazyPower, nothing there other than the sidecar's log showing the connection refused
<lazyPower> stormmore - that sounds suspect and similar to Zic's issue where it's a maxconn limit on the daemon
<lazyPower> stormmore - if you scale kubedns to 3 replicas does it continue to be a CLBO issue?
<Zic> to fetch the dnsmasq logs, I needed to use `docker logs <container>` directly at the kubernetes-worker which hosts the Pod's containers, because through `kubectl logs` I got nothing interesting
<stormmore> scaling up now to find out :)
<Zic> days after, I think it was because I didn't know the `kubectl logs kube-dns --container dnsmasq` syntax :p
<stormmore> Zic, yeah I did that to confirm that I was seeing the same output in the container logs in the kubernetes dashboard too
<stormmore> OK it is scaled and green (for now!)
<stormmore> lazyPower, if you are correct, scale is not the issue since I basically had the default pods setup
<Zic> stormmore: maybe you have one container/pod which "talks" a lot to kube-dns
<Zic> in my case, it was not the number of Pods I had; this error was triggered by only 4 pods
<Zic> it was misconfigured Cassandra pods, which were querying kube-dns in a loop
<stormmore> Zic, nope only basic nodejs containers that are not really aware of their environment yet.
<stormmore> lazyPower, definitely isn't working, the new pods are in a restart loop already
<Zic> stormmore: if you tail -f /var/log/syslog at your kubernetes-master, do you see any suspicious lines?
<Zic> as I understood, kube-dns healthz/readiness is checked through the API
<Zic> so check if you see any denied or error GETs at the kubernetes-master
<stormmore> nothing but a bunch of 200s
<lazyPower> hmmm... something's amiss if you aren't getting error output in the console and the api is giving you 200's
<lazyPower> stormmore - ok let's try to reduce to square 1, can you wholesale remove the kube-dns deployment and reschedule? the rendered template is in /etc/kubernetes/addons/
<lazyPower> stormmore - i'd like to get you to attach and tail the container logs and kubectl log output for the application pods until it reaches CLBO. we might catch something happening
<lazyPower> this is where i wish i had prometheus monitoring completed, we could likely scrape the issue out of the metrics.
<stormmore> yeah that would be nice too :P
<lazyPower> future me will appreciate it :)
<lazyPower> but present me hates that its not there
<stormmore> so basically you want me to destroy / recreate the kube-dns deployment, right?
<Zic> (lazyPower: does CDK plan to integrate Prometheus by default? or through a third-party charm?)
<lazyPower> third party charm - i'm 90% certain of that
<stormmore> as for the log output lazyPower, you want the container logs from all 3 containers in the pod, right?
<lazyPower> there are already helm charts to deploy prometheus if you want it today, but thats not a very juju-style answer. what do you do when your k8s is sick and you want that data? :)
<lazyPower> stormmore - yeah, we'll need all 3 to correlate
<stormmore> ack
<Zic> because I'm scratching my head trying to add metrics to our Nagios/Naemon by hand... if a Prometheus charm automates this, it will help me a lot, I confirm :p
<lazyPower> Zic - thats the idea, you betchya
<Zic> my Naemon's metrics are currently just a bunch of curls to the K8s API status :/
<Zic> for pods, nodes, services...
<Zic> it's kinda creepy
<lazyPower> Zic - in the interim there's always BEATS
<lazyPower> and with metricbeat you can create custom metrics fairly easily
<lazyPower> which could in turn handle that and stuff it in ES to be graphed with kibana
<Zic> Beats just sounds like horrible earpods in my head
<Zic> am I missing something? :D
<lazyPower> elastic released golang-based agents to ship arbitrary metrics
<lazyPower> Zic https://insights.ubuntu.com/2016/09/22/monitoring-big-software-stacks-with-the-elastic-stack/
<Zic> "elastic stack" sounds as ELK now, ARGH! :p
<lazyPower> its the successor to ELK
<Zic> oh nice
<Zic> because my current Naemon checks look (ugly) like this: `curl -sk https://<kubeapi-load-balancer>/api/v1/pods | grep phase` for Pods status, for example
<Zic> I'm trying to avoid grep and use "jq" instead, as the output is JSON
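A rough sketch of that check with the parsing done in code rather than grep. The load balancer address is a placeholder, authentication is omitted, and certificate verification is skipped just as with curl -sk:

    #!/usr/bin/env python3
    # count pods by phase via the kubernetes API: a structured take on curl | grep
    # NOTE: the URL is a placeholder and auth headers are omitted for brevity;
    # certificate verification is disabled, matching the trust model of curl -k
    import json
    import ssl
    import urllib.request
    from collections import Counter

    URL = "https://kubeapi-load-balancer:443/api/v1/pods"  # hypothetical address

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with urllib.request.urlopen(URL, context=ctx) as resp:
        pods = json.load(resp)["items"]

    # e.g. Counter({'Running': 42, 'Pending': 1}); alert on anything not Running
    print(Counter(pod["status"]["phase"] for pod in pods))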
<lazyPower> yeah, metricbeat can just poll that whole json feed, and stuff it in ES
<lazyPower> you can then subquery in the dashboard to make nice timeseries charts out of it.
<lazyPower> or do additional parsing in logstash, whatever your business logic is
<lazyPower> the idea is to be flexible and give you a swiss army knife to make meaningful reports based on whats important to you as an operator / analyst
<lazyPower> that's why i fell in love with beats, you don't have to code your app to integrate with it like you do with prometheus
<stormmore> well it is re-provisioned and running green for now, going to make a cup of tea and see if it can last at least that long
<Zic> my main concern is alerting (we have TVs which display the current status of all our platforms at the office) and mail-alerting/SMS for our on-call rotation
<lazyPower> stormmore - ok, thanks for doing the debug work, i'm concerned that there's a hidden dragon in here we've not encountered and therefore aren't accounting for.
<lazyPower> you're the second user that's reported kubedns failures in < 1 month. it's likely that release of the addon might just be hinky
<Zic> all these are linked to Naemon for now
<Zic> but as we're testing Prometheus for other platforms, it will be nice to have it for CDK also :D
<lazyPower> Zic - its future work but on the roadmap :) again, i'll ping ya when somethings brewing there
<lazyPower> happy to help you clean up addon services to replace with charms, because thats how we roll
<stormmore> not a problem, least I can do :)
<stormmore> although the bad good news might be that the rescheduled deployment might have solved the problem
<Zic> in any case, even if I don't have any more CLBO or dnsmasq maxconn reached on kube-dns, it continues to restart sometimes, but as I scaled them, at least they are not restarting at the same time, so no unavailability: http://paste.ubuntu.com/24048434/
<Zic> look at the restarts column
<Zic> as lazyPower said, this kube-dns release seems not to be so stable :/
<stormmore> so far I am not seeing any connection refused errors in the sidecar container, which was what I was seeing before
<lazyPower> stormmore - might be a sync issue :|
<lazyPower> i'm not impressed with this release of kubedns. when we circle back to the 1.5.3 release we'll grab the latest manifests for that rel and see if we can get this resolved via addon bumps
<lazyPower> but i'm not hopeful
<Zic> it's not blocking; like stormmore I had a hard CLBO before, now it's scaled, I just have some "instant-restarts", and not all at the same time
<Zic> but it's weird :/
<stormmore> does make me question the decision to use kubernetes / docker vs some lxd type environment right now
<stormmore> dns seems critical enough to me that it needs to be stable
<lazyPower> i'm sure if we gather enough info and bug it, that it'll get fixed
<lazyPower> we just need to find that root cause and get it contributed
<lazyPower> if its biting us, its biting other users
<lazyPower> and i'd rather not throw the baby out with the bath water :) but on that note stormmore - i'm more than happy to support you in a move to lxd as well. because LXD is the bees knees
<stormmore> oh agreed, questioning isn't going to keep me from figuring it out
<stormmore> lazyPower, I just need to do my research on Docker to LXD
<Zic> we're not planning to use Kubernetes and LXD in the same place here; we're using Kubernetes as a PaaS (= our customers manage which pods are running, we manage the deployment and the availability of the cluster); for fully-managed infrastructure, we're currently using VMware ESXi or Proxmox, and LXD will be part of this list
<stormmore> lazyPower, from the little research I have done, outside of maybe Juju, LXD management / orchestration isn't as mature as k8s
<Zic> TL;DR: Kubernetes as Docker's orchestrator / LXD as hypervisor; even if LXD uses LXC container technologies, it's more like VMs
<Zic> the only things that offer a "versus" to Kubernetes are Swarm or Rancher, with fewer features
<Zic> (we have some Rancher here, and our PoC of Swarm was not satisfying)
<stormmore> oh I definitely get that by management, I mean things like the kubenetes dashboard level maturity
<lazyPower> ehhhhhh
<lazyPower> not so sure i agree with that sentiment, but i'm clearly biased
<lazyPower> so i'll let you come up with your own conclusions, however lxd has been in prod here at canonical since release, and before that with lxc.  flockport even built an entire hosting company on lxc
<stormmore> lazyPower, don't get me wrong the juju gui is one of the nicest guis I have seen but it doesn't have the level of data that the kubernetes one does
<lazyPower> well sure, those are wildly different use cases
<lazyPower> the juju gui is only intended to be used for modeling your applications and then routine tasks. There have been many requests to integrate things like ssh in the browser, log aggregation, etc.
<lazyPower> i don't think we've had the manpower to bump that up in priority however, as other things like model migrations and whatnot have taken precedence.
<Zic> personally I haven't touched the Juju webUI since I finished the deployment, I'm doing all the post-deployment stuff with the juju CLI only
<lazyPower> which are arguably larger / more important features for the core of juju to have.
<lazyPower> yeah
<lazyPower> we find that most operators tend to do that
<Zic> and it was for our baremetal/manual installation
<lazyPower> myself included, i look at the juju ui during testing only or when i'm mocking something up quick to share.
<Zic> for labs, with conjure-up, I didn't use Juju WebUI at all
<lazyPower> but that being said
<lazyPower> comparing apples/oranges here :)
<lazyPower> i found this though
<lazyPower> https://github.com/aarnaud/lxd-webui
<lazyPower> i haven't used it, and it looks a bit long in the tooth - 9 months since the last contribution - however... looks neato
<stormmore> not faulting juju at all, just saying from a cluster management perspective the kubernetes dashboard is awesome
<lazyPower> stormmore - well its a good thing we grabbed it for part of the CDK :D I'm happy i could deliver on that one
<lazyPower> stormmore - still no issues with kube-dns?
<Zic> here it's this way: LXD is used as a hypervisor (and competes with Proxmox, VMware ESXi, KVM) even if it uses LXC container technologies inside; Juju is used as a tool to deploy and manage highly complicated platforms like K8s or OpenStack; Kubernetes is used for a customer who comes with "I have 100 docker containers running on raw dockerd, do you have something to orchestrate them and take them to production?" :)
<lazyPower> if it's running idle as it should right now, i fear we're running into a race condition with the pods or a sync issue or something similar. Just keep that pipeline open and if you catch something, dump the logs and let's bundle it up for an issue, even if the results are inconclusive.
<stormmore> lazyPower, Zic I use the command line more often than not for things but it is always nice to have readily available "pretty pictures" to show people
<stormmore> and lazyPower still green
<lazyPower> ok, sounds good. Thanks again, i'm going to context switch back into the etcd bits and focus on landing this multi-master PR
<lazyPower> ping me if you need anything. otherwise i'm resuming silent operation
<stormmore> yeah I am going to go back to working out a solution to my permissions issue
<Zic> lazyPower: the good thing I note for later is that you're in the middle of your office day when I'm on my on-call rotation :p
<Zic> (it's 21:19 here o/)
<Zic> and I'm on-call this week :p
<stormmore> I can happily say I am not on call at the moment :)
<Zic> :D
<stormmore> oh that just means I have more time to architect and design environments for now
<cory_fu> jrwren: http://pastebin.ubuntu.com/24048767/ on the test now
<cory_fu> jrwren: And on the other MR: http://pastebin.ubuntu.com/24048870/
<kwmonroe> petevg: i need your unit test guru status.  i wanna unit test actions.  my actions have hyphens in the name "do-stuff" without a .py extension.  that makes imports hard.  my workaround is to have "do-stuff" import ./stuff.py, and call "stuff.action()".  then i just unit test stuff.py.  any better way?
<petevg> kwmonroe: Your way sounds pretty good. Not naming a python script "blah.py" is kind of an anti-pattern, so the workaround isn't necessarily going to be pretty.
<petevg> kwmonroe: you could also copy the file to a temp dir, with a ".py" extension, and import it from there.
<petevg> kwmonroe: ... or you could try to hack on Python's import machinery, to make it work with a non-.py file. Shouldn't be too scary, but I don't know what you'd do off the top of my head.
<kwmonroe> omg petevg, i don't know why i talk to you.  you went from "sounds pretty good" to "this is gonna hurt" in like 2 messages.
<petevg> Just trying to be helpful :-)
<kwmonroe> well i'm all for anti anti-patterns, but i don't believe actions can have a suffix, which makes this particularly annoying
<lazyPower> lib + wrapper, i.e. action == executor, action.py == library
<kwmonroe> i do appreciate the alternatives petevg!  just giving you grief.  also, it's 85F here in texas, i'm coding by the pool.  how's shoveling your driveway going?
<lazyPower> i don't know what that would do though, if you use a foo executor and foo.py... whether it would give you grief during import
<petevg> kwmonroe: The snow is actually basically melting, because it's kind of the same thing as being 85F here, relatively speaking :-p
<kwmonroe> wait lazyPower, i don't follow your == suggestions.  are you suggesting symlink the action to action.py?
<petevg> Ooh. A temp symlink > temp file.
<lazyPower> symlinks would work
<lazyPower> slightly opaque, but doable
<kwmonroe> i know the bashism to call a method based on $0, but how do you do that in python?
<lazyPower> sys.argv[0]
<kwmonroe> nice
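For reference, a minimal sketch of the $0 trick in Python, assuming each action (e.g. actions/do-stuff) is a symlink to one shared script:

    #!/usr/bin/env python3
    # dispatch on the invoked script name: the python analogue of bash's $0
    import os
    import sys

    def do_stuff():
        print("doing stuff")  # placeholder action body

    if __name__ == "__main__":
        # "do-stuff" becomes "do_stuff"; unknown names raise KeyError
        name = os.path.basename(sys.argv[0]).replace("-", "_")
        globals()[name]()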
<kwmonroe> nm, i hate that for the same reason i hate trying to follow old charms with links to hooks.py.
<kwmonroe> i'll just shell out to "java -jar myaction.jar <func>" like matt taught me.
<Siva> I want to know the number of units of a charm deployed, from within the charm code
<Guest1503> How can I find that?
<petevg> kwmonroe: it looks like there's a way to do it, no hacking needed, with the "imp" module. Or so says Stack Overflow: http://stackoverflow.com/questions/2601047/import-a-python-module-without-the-py-extension
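The same idea also works with the newer importlib machinery, which happily loads a hyphenated file with no .py extension; a minimal sketch for a unit test, where actions/do-stuff and its main() are hypothetical:

    # load actions/do-stuff as a module despite the missing .py extension
    import importlib.util
    from importlib.machinery import SourceFileLoader

    loader = SourceFileLoader("do_stuff", "actions/do-stuff")
    spec = importlib.util.spec_from_loader("do_stuff", loader)
    action = importlib.util.module_from_spec(spec)
    loader.exec_module(action)

    action.main()  # exercise the action's entry point in the test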
<Guest1503> I want to find out the ipaddress of all the peer units deployed. How can I find that from within the charm code?
<petevg> Guest1503: if the peers in the charm have a relation to each other, you can query the conversations in the relation.
<petevg> For an example, see the interface the Zookeeper charm uses to wrangle its peers at https://github.com/juju-solutions/interface-zookeeper-quorum/blob/master/peers.py
<petevg> Guest1503: this assumes that you're writing a layered charm using the reactive framework, and it does require writing an interface. If you have some trouble figuring out how everything works, I might be able to answer specific questions. cory_fu might be able to help you out, as well.
<Guest1503> @petevg, I am not writing layered charms....normal charms based on the hooks
<petevg> Guest1503: I'm afraid that you've stepped outside of my area of expertise, then :-/ Does anyone else have any advice on doing interface style stuff in an older style charm?
<Guest1503> @petevg, in my case I want to make sure I have all the peer ip's before I do some operation. The problem is how do I find out, in the charm code, the num_units specified in the bundle?
<cory_fu> Guest1503: Either way, you will need to use a peer relation.  On the <peer-relation-name>-relation-joined hook, you should be able to use `relation-get private-address`
<Guest1503> @cory_fu, Yes that will work.. but for me I need a way to know if all the peer relation ip's have been fetched
<Guest1503> How do I find that out?
<cory_fu> Or you could iterate over the peers in any hook using related-units, and call relation-get with an explicit relation-id and unit
<petevg> Guest1503: I'd consider refactoring so that your charm can handle an additional peer joining after you've done whatever processing that you're doing. Someone can add another peer with "juju add-unit" at any time, so the code will need to handle the case where you add a peer, anyway.
<cory_fu> What do you mean by "all the peers"?  Each unit will be able to see all of the peers that are connected to it, though it might take a small amount of time before a new peer is connected to all of the other units
<cory_fu> Right, what petevg said.  You can always add more units
<cory_fu> Or remove them
<Guest1503> @cory_fu, say I deploy 3 units of a charm using a bundle... so you recommend the 'for' loop over peers in some other hook rather than the <peer-relation-name>-relation-joined hook?
<petevg> Guest1503: the "best practice" pattern is "this hook/event fires off when I have a new peer on my relation, and I do the appropriate thing." There isn't really a "wait until all my peers have come up" event, because you can never be confident that the operator is done adding peers.
<petevg> Guest1503: so the <peer_relation_joined_hook> is the correct hook. It just needs to do the right thing whenever a new peer joins.
<petevg> Guest1503: in zookeeper's case, it writes out the peer to a list in a config file, then restarts the zookeeper daemon.
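A minimal sketch of that pattern in a classic hooks charm, shelling out to the hook tools mentioned above; the peer relation name "cluster" is a placeholder:

    #!/usr/bin/env python3
    # collect the private-address of every peer currently visible to this unit
    import json
    import subprocess

    def _tool(*cmd):
        # the juju hook tools support --format=json output
        return json.loads(subprocess.check_output(cmd + ("--format=json",)))

    def peer_addresses(relation_name="cluster"):  # hypothetical peer relation
        addresses = []
        for rid in _tool("relation-ids", relation_name):
            for unit in _tool("related-units", "-r", rid):
                addresses.append(
                    _tool("relation-get", "-r", rid, "private-address", unit))
        return addresses

    # more peers can join later, so call this from every -joined/-changed hook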
<petevg> cory_fu: if I want to grab the latest cwr-ci bundle, this invocation should do it, right? `charm pull cs:~bigdata/bundles/cwr-ci`
<petevg> (Or did we move it out of bigdata?)
<cory_fu> petevg: It's in ~juju-solutions
<petevg> ... and it's singular bundle, rather than plural.
<petevg> cory_fu: thx.
<petevg> Apparently, my bash history is all lies.
<jrwren> cory_fu: are you using a different lint tool? `make lint` returns nothing for me.
<cory_fu> jrwren: Lint is fine now, it's `make test` that's failing now.  And the config-changed hook in the other MR
<jrwren> cory_fu: I can't repro that either. :(  not saying it's not happening though. I'm sure it's something strange about my setup.  Thanks for the fast feedback. I'll have fixes tomorrow.
<cory_fu> k
#juju 2017-02-23
<sur_> anrah:
<sur_> are you there
<sur_> sur:
<sur_> sur_:
<anrah> now i am
<kjackal> Good morning Juju world!
<Zic> hi here, if I want to remove one of my kubernetes-workers in a CDK bundle, do I simply need to remove that machine from Juju, and the charms will do the proper cleanup expected?
<brym> morning all
<magicaltrout> lazyPower: lmk when you're in your seat
<Zic> -> to answer myself: yeah, the kubernetes-worker charm handles remove-unit well, I did a test on my test cluster
<Zic> it was cleanly removed from Kubernetes before being removed from Juju
<joedborg> hello all.  does anyone know if there's a juju command to monitor which set_states are being called in real time?
<kwmonroe> joedborg: this isn't what you asked for, but in case it's helpful, i use this to monitor states on a given unit:  juju run --unit foo/0 'charms.reactive get_states'
<kwmonroe> that's not real-time, but it's useful to know what states are set to debug why a handler is (or isn't) firing.
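Still not real-time, but a small polling loop gets close; a sketch assuming a placeholder unit name of foo/0 and one "state: null" entry per line of output:

    #!/usr/bin/env python3
    # poll the reactive states on a unit and print transitions as they happen
    import subprocess
    import time

    UNIT = "foo/0"  # placeholder unit name
    previous = set()

    while True:
        out = subprocess.check_output(
            ["juju", "run", "--unit", UNIT, "charms.reactive get_states"])
        # each line looks like "state.name: null"; keep just the state name
        current = {line.split(":")[0].strip()
                   for line in out.decode().splitlines() if ":" in line}
        for state in sorted(current - previous):
            print("set:", state)
        for state in sorted(previous - current):
            print("removed:", state)
        previous = current
        time.sleep(5)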
<joedborg> @kwmonroe thanks
<joedborg> @kwmonroe although everything seems to come back as None value, is this normal?
<kwmonroe> joedborg: yeah, i don't think state keys have 'values' per se.  cory_fu might have more enlightening things to say about the normalcy of "state: null", but fwiw, here's an example of the states set on a hadoop slave:  http://paste.ubuntu.com/24054153/
<kwmonroe> ^^ and that's normal
<cory_fu> Right, the only states that would have values are relation states.  That helper should really just be returning the keys
<cory_fu> kwmonroe, petevg: Travis passing: https://github.com/juju-solutions/layer-cwr/pull/94
<kwmonroe> crap, i guess i have to merge stuff now
<kwmonroe> merged cory_fu.. don't worry about releasing cwr-52 to stable, i'll cut a new one that picks this up.
<kwmonroe> hahah cory_fu, you just conflicted kjackal (https://github.com/juju-solutions/layer-cwr/pull/93).  payback, eh?
<cory_fu> :)
<joedborg> @corey_fu @kwmonroe: ah, so these are all ones that have been fired?
<joedborg> @cory_fu ^^
<kwmonroe> joedborg: 'charms.reactive get_states' will show you all the states that have been set.  that doesn't necessarily mean any handler has fired, only that it could if it's written to react to those states.
<joedborg> @kwmonroe ah okay, i'm trying to see which ones have been fired to debug an issue
<admcleod_> joedborg: i find this useful: jgs() { juju run --unit $1 "charms.reactive --format=yaml get_states"; }
<stormmore> o/ juju world
<redir> \o stormmore
<stormmore> are we having fun today?
<kwmonroe> joedborg: i don't think there's a history of handlers that have fired outside of "juju debug-log -i unit-foo-0 --replay" and fancy grepping...
<kwmonroe> still, if you see a state has been set, it's pretty safe to assume any handler reacting to that state has fired.
<joedborg> @kwmonroe thanks
<kwmonroe> np joedborg - if you need any help debugging, just holler... i'm happy to dig through debug-log output with you.  well, maybe not happy, but i'll do it :)
<joedborg> kwmonroe: thanks man, I'll probably take you up on that offer :)
<kwmonroe> cory_fu: did we have a rationale for making hadoop-client single-series?  https://jujucharms.com/hadoop-client/.  i think we didn't want to mess with the trusty version (perhaps back when the java interface was added?) but i need to rebuild it and am curious if we should just make it multi.
<cory_fu> kwmonroe: I really don't recall
<tvansteenburgh> lorenzotomasini: https://github.com/juju/python-libjuju/pull/56
<tvansteenburgh> lorenzotomasini: that addresses the issues we talked about last week. trying to get some tests in there before merging, but hoping to release those changes soon
<rick_h> tvansteenburgh: ping
<tvansteenburgh> rick_h: pong
<rick_h> tvansteenburgh: back from a break and poking at adding this list-models method
<rick_h> tvansteenburgh: is there some pattern established for turning a Model object from the _client into a Model object from model.py?
<rick_h> tvansteenburgh: I feel like I should get the facade bits back and build a proper Model object out of it somehow but not seeing anything atm
<tvansteenburgh> rick_h: got any code i can look at? i don't entirely follow
<rick_h> tvansteenburgh: https://pastebin.canonical.com/180574/ is what I've got in my head/current code
<rick_h> tvansteenburgh: basically I'm looking that I shouldn't expose the end user of the controller/model code to the internal object types used in _client.py
<rick_h> tvansteenburgh: so I'm wondering if there's any pattern for transforming one into the other
<tvansteenburgh> rick_h: so you'll want to make new Model() and then call connect(...) on them
<rick_h> tvansteenburgh: right, but list_models should return what, in your mind?
<rick_h> tvansteenburgh: in the spirit of the library atm
<tvansteenburgh> imo it should just return strings
<rick_h> tvansteenburgh: of what? uuids? names?
<tvansteenburgh> list of model names
<tvansteenburgh> or uuids
<bdx> is remote lxd provider a thing now?
#juju 2017-02-24
<Budgie^Smore> Hey lazyPower have you run a redis cluster in k8s yet?
<lazyPwr> Budgie^Smore: It's been a minute, but yeah it was part of the petstore examples
<Budgie^Smore> yeah I was thinking of doing it stateful :)
<Budgie^Smore> although the google example only uses rcs
<lazyPwr> StatefulSet
<Budgie^Smore> yup
<lazyPwr> that would be a solid contribution back to their examples dir too
<Budgie^Smore> watching Hightower at RedisConf right now
<lazyPwr> Right on :)  Got a livestream link?
<Budgie^Smore> just a youtube vid from last year
<Budgie^Smore> https://www.youtube.com/watch?v=0hlv9iIncik&list=PL83Wfqi-zYZHtHoGv3PcGQA3lvE9p1eRl&index=16
<kjackal> Good morning Juju World!
<anrah> Hi! What is the username that each node uses when connecting with controller?
<anrah> I can see the password in agent.conf but I can't figure out the username
<lorenzotomasini> tvansteenburgh: thanks, we will for sure make use of the new version of the juju lib as soon as possible, but i think that we won't be able to migrate to the new one in time for the MWC demo
<pranav> Hi Folks. Does anyone know if charms for openstack release are available yet?
<pranav> sorry i meant openstack ocata release
<rick_h> tvansteenburgh: have you seen scripts bombing out due to "OSError: [Errno 9] Bad file descriptor" before?
<tvansteenburgh> rick_h: yes, was just working on that yesterday
<tvansteenburgh> rick_h: typically happens when you don't clean up connections before closing the event loop
<rick_h> tvansteenburgh: any clue on what tossed it? the failures are coming out of python code vs my code so not sure wtf happened
<rick_h> tvansteenburgh: oic
<tvansteenburgh> rick_h: i can look at code or hangout if you want
<rick_h> tvansteenburgh: that's a good hint. I'm not cleaning stuff up. Will poke at it
<tvansteenburgh> rick_h: cool, lmk if i can help
<rick_h> tvansteenburgh: https://pastebin.canonical.com/180649/ is what I'm hacking on atm
<rick_h> tvansteenburgh: so assume I need to add some cleanup to my stress runs
<tvansteenburgh> rick_h: yeah you probably want to await cont.disconnect() and await stress.disconnect() at the end of the stress func
<rick_h> tvansteenburgh: makes sense ty
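A minimal sketch of that cleanup pattern with python-libjuju (the exact connect call varies across libjuju versions; the model name and workload are placeholders):

    import asyncio
    from juju.controller import Controller

    async def stress():
        controller = Controller()
        await controller.connect()                      # current controller
        model = await controller.add_model("stress-0")  # placeholder name
        try:
            await model.deploy("ubuntu")                # placeholder workload
        finally:
            # close the websocket connections before the event loop shuts
            # down, or "OSError: [Errno 9] Bad file descriptor" can surface
            await model.disconnect()
            await controller.disconnect()

    asyncio.get_event_loop().run_until_complete(stress())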
<jamespage> marcoceppi: ok our openstack charms release is all clear - feel free to move charm-helpers when you're ready
<jamespage> if there is anything I can do to help just shout
<marcoceppi> jamespage: thanks!
<mimizone> hi all.
<mimizone> after creating a juju controller via maas, when deploying a model to the same machine, machine-0 ends up in the "down" state. How can I investigate?
<mimizone> I did the 2 following commands
<mimizone> juju bootstrap r16maas --debug --to osv16ocp1b
<mimizone> juju deploy juju-gui --to osv16ocp1b
<mimizone> I use Ubuntu Xenial, Juju 2.1.0 and MAAS 2.1.3
<mimizone> when sshing into the "machine 0", the juju-machine-0 service can't be started. It says it's not found.
<stormmore> anyone care to explain to me the logic behind having the controller model instead of having the controller be part of the default model?
<rick_h> stormmore: because folks would bootstrap and start deploying
<rick_h> stormmore: and the controller model cannot be destroyed
<rick_h> stormmore: so this way, folks are put into a safe place right after bootstrap, the controller model is kept apart, and users are encouraged to use the default, create their own, and eventually remove the default model all safely
<stormmore> rick_h, I get that. Just seems overkill to have to assign a full machine / instance for juju. Not so big a deal with cloud instances, but bare metal...
<rick_h> stormmore: you're completely free to switch over there and run things there
<rick_h> stormmore: that tends to be the use when folks use one controller per deploy, but in a multi-model world models are cheap
<jhobbs> stormmore i usually make a VM on the server that's running MAAS and put the controller on it
<rick_h> so having one machine be the controller for dozens of models is what we're defaulting to
<rick_h> stormmore: what jhobbs said ^
<rick_h> I know of folks that setup a lxd, register it in maas, and deploy to it in that way
<stormmore> rick_h, oh I know that too, just saying. most likely I will go multimodel at some point
<rick_h> stormmore: but especially if you head toward HA/etc we default to the middle ground
<rick_h> not too scaled down, not too scaled up
<stormmore> jhobbs, yeah that is the conclusion I came to too ;-)
<rick_h> for us it's a case of sane/safe defaults. There's more options available that will hopefully work for what you need
<stormmore> rick_h, I actually thought about doing it as an LXD on the same node I was installing MaaS on but was wondering if it would be better to have it on the node that is going to be the cluster master and run other services on it
<rick_h> stormmore: I guess "better" with what criteria is the question
<stormmore> rick_h, well the basic objective is to minimize the amount of resources required for managing a cluster. for instance I am planning on moving the maas postgres db into the cluster once it's fully up
<rick_h> stormmore: interesting
<stormmore> The plan is to have a bootstrap node that is "removed" once a data center is fully functional
<stormmore> yes it means making sure everything is HA
<stormmore> rick_h, am I understanding you correctly that people are getting maas to manage some LXD containers as well as other bare-metal?
<jhobbs> yeah, i've never heard of that :) if that's possible I'd like to know more
 * rick_h checks his sources
<stormmore> you and me both jhobbs :) to me even VMs seem overkill for building a management node
<rick_h> ah, kvm is what I'm thinking of: https://blog.felipe-alfaro.com/2014/05/29/how-to-configure-maas-to-be-able-to-boot-kvm-virtual-machines/ and the like
<mimizone> still having issues here with the machine-0 reported down in the juju status. fresh bootstrap install via maas/physical, and trying to put the juju-gui on the machine-0
<stormmore> yeah I was pondering that, just not as experienced with KVM as other virt tech :-/
<rick_h> http://askubuntu.com/questions/665741/how-to-create-a-lxd-container-on-a-machine-and-register-it-in-maas for the lxd side
<rick_h> mimizone: what version of juju?
<mimizone> juju 2.1.0-xenial-amd64
<rick_h> mimizone: juju gui comes bundled ootb
<rick_h> mimizone: just run juju-gui
<rick_h> sorry
<rick_h> "juju gui"
<rick_h> as for the reporting of down, is the machine down? can you ssh to it?
<rick_h> mimizone: this is after you "juju switch controller" and see machine 0 there?
<mimizone> yes I can ssh to the machine and I see jujud-machine-0 service is up
<mimizone> juju switch controller? I must have read the wrong documentation, never saw this step
<rick_h> mimizone: right, so in juju 2, there is the ability to operate multiple models
<rick_h> mimizone: and out of the box, you end up on an empty model w/o access to the controller itself so that it's safe
<rick_h> mimizone: you can switch to the controller and see machine 0 by using "juju switch controller"
<rick_h> mimizone: check out 'juju models' as well
<rick_h> mimizone: but the gui is built in so you can just run the 'juju gui' command and get it to open up.
<mimizone> Aaaaaaahh :) much better with the right explanation :)
<mimizone> good to see the gui is built in
<rick_h> mimizone: if you're new to juju2 I'd check out https://jujucharms.com/docs/stable/introducing-2
<mimizone> rick_h: thanks
<rick_h> mimizone: and some videos from the past: https://www.youtube.com/results?search_query=juju+2.0
<mimizone> I will check. I am doing all this for the JOID model for OPNFV
<mimizone> rick_h: thanks again. was completely far from understanding all the changes.
<rick_h> mimizone: all good, let us know if you have any questions
<rick_h> mimizone: hopefully some goodness in there for you to discover
<stormmore> https://insights.ubuntu.com/2015/01/15/virtualbox-extensions-for-maas/ is what I am using for inspiration right now
<stormmore> hey lazyPower so 2 days without a restart on kube-dns :)
<petevg> cory_fu, kwmonroe: https://github.com/juju-solutions/matrix/pull/84
<petevg> kwmonroe: thx for the +1. I went ahead and merged the code :-)
<kwmonroe> i trust you completely petevg :)
<petevg> I will strive to live up to your trust. Or something :-)
<kwmonroe> cory_fu: does this look familiar?  http://paste.ubuntu.com/24061228/  i just built a local jenkins charm, now it's complaining about layer.options('basic')
<cory_fu> kwmonroe: o_O  The function is definitely there: https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/__init__.py
<cory_fu> kwmonroe: Any chance you have an old copy of layer-basic checked out locally?
<kwmonroe> lol cory_fu, my bad
<kwmonroe> i'm dumb... "find . -name __init__.py -delete" is rarely what you wanna do.
<cory_fu> ha
<cory_fu> I was about to be worried
<kwmonroe> nothing like a catastrophic layer-basic failure at 5pm on a friday, #amirite?
<cory_fu> :)
#juju 2017-02-25
<Budgie^Smore> I just found the "inspiration" for hardening maas once I have my cluster bootstrapped :)
<stormmore> ok that feels better... weird how not having a tool really puts you off your game
<stormmore> hey lazyPower have you seen the ATT Community Development github repos?
<ybaumy> hi. im trying to deploy openstack with juju charms which creates containers inside containers. but somehow the permissions are not inherited
<ybaumy> im using xenial
<ybaumy> show-machine says for the containers that creating container: failed to change ownership of /var/lib/lxd/containers/juju.../rootfs
<ybaumy> i set security.nesting to true but that doesnt change the behaviour
<lazyPower> ybaumy: are you using zfs pools?
<ybaumy> lazyPower: no
<ybaumy> i filed a bug report to see what the devs say to this
<ybaumy> so i read i had to do an lxd init to correctly inherit permissions from the parent container. i thought maybe i could trick the deployment by setting up the machines manually and then starting the deploy process. but the deploy just created new machines and didn't use the ones i set up
<ybaumy> so same result
<ybaumy> there is a --to parameter for deploy but i don't know how to say .. like use machines from 1-5
<ybaumy> does anyone know how to bind bundles to a set of machines?
<ybaumy> is there somebody online from the devs in my timezone?
<ybaumy> which is CET
<rick_h> ybaumy: bundles don't support existing machines as that makes them not a reusable model.
<rick_h> ybaumy: there's a feature item to make that part of the deploy command so you can map the machines in a bundle to existing machines in a clean way
<rick_h> But it's not available yet
<ybaumy> rick_h: can you explain to me whether it's a bug that permissions are not correctly inherited in a container->container model
<rick_h> Permissions of what?
<rick_h> ybaumy: ^
<ybaumy> show-machine says for the containers that creating container: failed to change ownership of /var/lib/lxd/containers/juju.../rootfs
<ybaumy> like 0/lxd/0
<ybaumy> thats the problem im having
<rick_h> ybaumy: out of the box juju does not support nested containers with the default lxd profile. Doing so is a security concern. Lxd ships a different profile for that use case.
<ybaumy> rick_h: so what can i do?
<rick_h> ybaumy: I know conjure-up and the lxd on OpenStack work (novalxd) do some updates to the profile to work.
<ybaumy> rick_h: i got conjure-up working but i wanted to try juju deploy
<ybaumy> rick_h: but i can live with that ... if there is no way
<rick_h> ybaumy: https://insights.ubuntu.com/2016/12/07/lxd-2-0-lxd-and-openstack-1112/ and some other notes with the OpenStack bundles/etc have some instructions
<rick_h> https://github.com/openstack-charmers/openstack-on-lxd/blob/master/README.md
<rick_h> ybaumy: I'd check out ^ as that comes from the team working on those OpenStack charms
<ybaumy> rick_h: ok i will. thanks for the links.
<rick_h> ybaumy: good luck
<ybaumy> thx
<ybaumy> im now trying vmware + juju lets see how to deploy openstack there
<ybaumy> rick_h: this looks much better ... no permission problems here. i should have tried that in the first place
<ybaumy> but too bad. the install hooks are exiting with error
<ybaumy> i guess i have to manually roll it out
<rick_h> ybaumy: what's the install hook errors?
<ybaumy> rick_h: i checked one server and it says in the machine.log in /var/log/juju that no kvm containers are possible
<ybaumy> rick_h: it then exits with 1
<ybaumy> that was on node 0 .. on node 1 i see the same message but also lxd init messages
<ybaumy> rick_h: it's not possible in this state to run lxd init, which i understand since an lxd instance is already running
<ybaumy> i get the KVM message on every node
<ybaumy> btw im using xenial stable... should i switch to beta?
#juju 2017-02-26
<jamespage> ybaumy: stable and development charms are pretty much the same right now as we just released new stable versions
<jamespage> ybaumy: interested in how you get on with vmware - that should work ok but there are some gotchas with using LXD containers on vmware machines re networking
<ybaumy> jamespage: i must say i gave up already on vmware deployment.
<ybaumy> jamespage: i dont know how to fix the install hook errors with KVM im getting
<ybaumy> jamespage: nothing really seems to work out of the box. even the conjure-up openstack lxd way has errors since the networking isnt working
<ybaumy> jamespage: maybe i'll try suse cloud 7 ... ubuntu has too much fixing involved
<ybaumy> jamespage: im not a dev. just a system engineer with the task to evaluate openstack deployments
<ybaumy> jamespage: i like ubuntu and i would like to favor it, but right now i can't seem to get it working without support. i can't tell my management to first invest in support and then i might get it working. i need a working sample before we move forward with the decision on which linux flavor we use.
<Zic> lazyPower: hi, I just checked my mailbox and saw that the PR for multi-master tokens was closed, does this mean that it's coming to the charmstore very soon? :D
<jamespage> ybaumy: the charms default to using KVM for virt; you'll need to set virt-type=qemu on the nova-compute charm
<jamespage> ybaumy: with regards to the conjure-up openstack problems you're having you should be getting networking OK
<jamespage> ybaumy: what spec of machine are you trying an all-in-one on?
<jamespage> ybaumy: its a way to start to understand juju/charms/openstack, but for a real POC you'll want to be using MAAS, rather than LXD
<jamespage> the LXD deployment is limited to a single machine
<jamespage> (qemu comment was in the context of using VMware)
<jamespage> if you want reference for the underlying bundles etc that conjure-up uses
<ybaumy> jamespage: will get back to this after football. im off
<jamespage> ybaumy: https://github.com/openstack-charmers/openstack-on-lxd and https://jujucharms.com/openstack-base/
<jamespage> are pretty much what conjure-up deploys (first on LXD, the second on MAAS)
<ybaumy> jamespage: well i will try maas then. i have setup a server already to deploy VM's in vsphere.
<ybaumy> jamespage: thanks for the links and the tip with the qemu
<ybaumy> jamespage: the thing is that in maas, commissioning doesn't work anymore. i haven't changed a thing :D it's like everything is a mess. maybe the daily image that got synced has a bug. it fails with an mbr magic error when trying to boot the second time
<ybaumy> jamespage: but as soon as this is fixed i will try again :D
#juju 2018-02-19
<ejat> hi .. i've executed : juju config nova-cloud-controller console-access-protocol=novnc
<ejat> but i didn't manage to view the instance's console
<ejat> can someone help/assist me?
<SuneK> The --docker-logins configuration option for kubernetes-worker is not really working for me. Any insight on this? What is it supposed to do on the server, create an authentication file or?
<onthatbeat> I want to deploy the telemetry base without Ceph and containers - on four nodes. The nodes should be: One for Controller, one for compute, one for networking and one for storage. Is this possible?
<studentz> conjure-up the canonical distribution of kubernetes: "Running machine configuration script" is taking a long time (40 mins). Any way to find the problem? Thanks
<rick_h> studentz: so if you've got it running it's driving a juju command underneath
<rick_h> studentz: you might try a "juju status" and see what comes out
<kwmonroe> SuneK: what's not working with --docker-logins?  is it perhaps trying to auth before docker-opts sets config?
<SuneK> kwmonroe I figured it out, the error was on my part, so it's working as expected after all :-)
<SuneK> Perhaps writing the output from the command in the logs, rather than just saying it exited with exit code 1 would be a small improvement
<kwmonroe> SuneK: didn't you hear?  "exit code non-zero" is the new "read my mind on why this failed"
<SuneK> Yeah, it's just that my mind reading skills are lacking :D
<kwmonroe> SuneK: fwiw, if you did have something in docker-opts that docker-login depended on, it very well might fail.  here's where we fixed it a couple weeks ago:  https://github.com/kubernetes/kubernetes/pull/59396
<kwmonroe> that fix hasn't quite made it to stable charms yet though
<SuneK> Thanks, that was not the case though, just a combo of invalid json and wrong credentials :D
<magicaltrout> they call him kevin... he works on a bank holiday..... they call him kevin he has all the github issues in quick links.... keeeeevin.......
<rick_h> damn, has his own song. Soon he'll have groupies following him around at sprints
<magicaltrout> hehe
<kwmonroe> what the heck holiday is it?  why isn't the post office taking my mail?  i've got a car load of bitcoin that i need shipped out asap.
<kwmonroe> oh, verified holiday.  k, be good folks!!
<rick_h> kwmonroe: heh, you not about celebrating our presidents?
<kwmonroe> when our president deploys cdk (successfully with --docker-logins), i'll celebrate.
<kwmonroe> SuneK: would you please file an issue where rc=1 could be better: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues
<SuneK> Sure, it's specifically for the kubernetes-worker charm. In reactive/kubernetes_worker.py. Is that under source control somewhere? I haven't been able to find it
<kwmonroe> yup SuneK, if it's an issue with k8s-* charms, file it at https://github.com/juju-solutions/bundle-canonical-kubernetes/issues
<kwmonroe> SuneK: the source lives upstream, but we triage stuff with the bundle of cdk charms
<SuneK> Can I link to the file other than https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-worker-81/archive/reactive/kubernetes_worker.py ?
<kwmonroe> sure SuneK - the upstream source for k8s-worker is https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/layers/kubernetes-worker
<SuneK> Thanks, I will file an issue
<kwmonroe> thanks!
<magicaltrout> they called him kevin.....
<magicaltrout> he moonlighted as a kubernetes expert
<magicaltrout> keviiiiin
<kwmonroe> :)
<magicaltrout> you get a song cause you're at work
<magicaltrout> rick_h bailed
<magicaltrout> so he doesn't get a song
<kwmonroe> to be fair, i'm at the bank questioning my life decisions.  do i just take all the money since nobody's here, or do i juju?
<kwmonroe> bbl
<rick_h> magicaltrout: booooo
<magicaltrout> they call him rick_h .... he likes to tile... riiiiick hhhhhhh.... he skips out on meetings.... says its the dentist or something..... riiiiick hhhhh
<magicaltrout> better? ;)
<rick_h> Lol
 * magicaltrout enters X Factor
#juju 2018-02-20
<Zic> Hi here, I'm trying to run "juju bootstrap manual/localhost cdk" with the latest Juju version from Snap and it hangs up infinitely on "Running machine configuration script...".
<Zic> When I'm trying to find on what it hangs up exactly, I saw "apt-get --option=Dpkg::Options::=--force-confold --option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet install cloud-utils"
<Zic> and when I run this manually myself, I got:
<Zic> The following packages have unmet dependencies:
<Zic>  cloud-utils : Depends: openjdk-6-jre but it is not installable
<Zic>                Depends: libbackport-util-concurrent-java (>= 3.1) but it is not installable
<Zic> Found the problem! apt policy cloud-utils reports that a package from our private local repository overrides cloud-utils from the Ubuntu Archive, sorry...
<magicaltrout> kjackal_: ping
<kjackal_> hello magicaltrout
<kjackal_> what's up?
<magicaltrout> ah quick CDK question
<magicaltrout> my understanding of the kubernetes-worker config is that
<magicaltrout> kubelet-extra-args is a KV config and the contents will be attached to the kubelet config
<magicaltrout> so i set
<magicaltrout>     value: maximum-dead-containers=3
<magicaltrout> but i don't see it set anywhere on the kubelets
<kjackal> lets see...
<kjackal> so these are extra arguments that will be used to start kubelet service with
<magicaltrout> yeah so i'd expect them to be in /var/snap/kubelet/current/args
<magicaltrout> or visible on ps aux |grep kubelet
<kjackal> yes they should be there
<kjackal> where do you set them? conjure-up during deployment? a bundle file? after the deployment?
<magicaltrout> after the deployment
<kjackal> strange...
<kjackal> let me think...
<magicaltrout> https://gist.github.com/buggtb/96a0d41472836a170f6bafacf087d41d
<kjackal> yes... I see it not being set. Give me 5 mins
<magicaltrout> winning
<kjackal> magicaltrout: I do not see this config param in kubelet: https://gist.github.com/ktsakalozos/2090e63f052d32b18b7605f1c4b36c2f
<kjackal> let me read the docs a bit
<magicaltrout> well even if it isn't would the charm parse it out?
<magicaltrout> https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/
<magicaltrout> my understanding is GC flags get added to the kubelet
<magicaltrout> I might be wrong, but i'd expect it to blow up, not ignore my config request
<kjackal> hm... there are discussions of removing these flags as they are deprecated: https://github.com/kubernetes/kubernetes/issues/40044 . But regardless, we should be able to set those...
<magicaltrout> well yeah
<magicaltrout> but they aren't removed yet
<kjackal> true, would you be able to open an issue here: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues
<kjackal> ?
<kjackal> magicaltrout: ^
<magicaltrout> sure
<kjackal> thanks
<magicaltrout> done
<kjackal> thank you, magicaltrout. My reply to the issue was to the team. ;)
<magicaltrout> other question
<magicaltrout> cause google is failing me
<magicaltrout> how do I get the deployed applications name in the charm?
<magicaltrout> nm
<utking> Hey guys! How can i deploy a saved model from the cli?
<utking> like juju deploy  --model maas-cloud-controller:gunny/utkingtest
<utking> just gives no charm og bundle specified
<magicaltrout> juju deploy mybundle.yaml
<magicaltrout> i think
<utking> ah ok :) do you have any idea where that file is?
<utking> the model was deployed with the gui, but i got some hook errors
<utking> so i fixed them, but it won't re-deploy my changes
<magicaltrout> that file?
<magicaltrout> its the export you did
<magicaltrout> bundle.yaml or whatever the download was called
<utking> thanks, a bit complicated tbh; it should have been saved locally, or at least been deployed when pressing the deploy button in the gui :p
<magicaltrout> if you save a model in the gui
<magicaltrout> it just downloads it doesn't it?
<utking> yeah it does, on my local computer, i then need to scp it in maas to run it, not a big problem, just a bit of extra work
<magicaltrout> do you not have juju running on your local computer?
<utking> no, it's been bootstrapped by maas on another node
<utking> :)
<magicaltrout> well if you have a network connection between your machine and the maas network, you could add the controller to you local setup
<magicaltrout> and then just run the commands locally rather than scping stuff around
<utking> Hmm, how do you mean? :)
<magicaltrout> so you have a controller, but you don't run the juju commands directly on the controller, well, not very often
<magicaltrout> so you can add the cert details and stuff to your local juju client, assuming you have one installed, and then switch to that controller and run commands against it
<utking> yep! I'm running the commands on the maas node
<magicaltrout> yeah
<magicaltrout> but you could just run them on your PC :P
<utking> haha, i'm on windows >_<
<utking> ssh'ed into the maas node
<magicaltrout> https://jujucharms.com/docs/2.0/reference-releases
<magicaltrout> luckily
<magicaltrout> there is a windows client :P
<utking> haha, oh wow
<utking> i wasn't aware!
<utking> Well thanks m8!
<utking> i'll try that one out
<magicaltrout> no probs
<utking> is it less buggy than the gui online?
<magicaltrout> a buggy gui?!
<magicaltrout> I've not noticed any issues in the gui
<magicaltrout> but i don't use it very often
<utking> haha, it doesn't work at all for us, it bugs out, deploys one charm 10 times when one was specified, etc
<utking> also if relations are changed and redeployed, nothing happens
<magicaltrout> well i've not seen that in either environment but i'll take your word for it
<magicaltrout> i'm sure there will be some americans online shortly who'd be interested in knowing more
<magicaltrout> they get a bit sad when the gui doesn't work
<utking> hahaha
<utking> also the exported yaml file won't import in the gui, just get a bunch of relations errors
<utking> but it deploys fine in cli
<magicaltrout> i'm sure rick_h can help file some issues when he gets his bald head to the keyboard
<rick_h> magicaltrout: :p
<utking> hahahaha
<magicaltrout> oh hi rick_h
<magicaltrout> didn't see you there.....
<rick_h> sorry my head glare is toned down...
<utking> haha :)
<magicaltrout> nice to know you've been polishing....
<utking> polishing is never bad, in any way
<rick_h> utking: do you have a pastebin of the bundle yaml we can check out?
<rick_h> utking: since the gui and the go client parse things up differently sometimes things can get out of sync
<rick_h> would love to get a bug in place and get things fixed up for you
<utking> i'll make one
<rick_h> ty
<utking> https://pastebin.com/KQw7j2HS
<utking> there you go rick_h :)
<utking> that one i'm unable to import in the web gui
<rick_h> utking: hmm, so using the gui it loads the yaml into an uncommitted state. When you say relation errors, are these the errors from the charms themselves?
<utking> hmm, it just won't import them
<rick_h> utking: so it takes a few seconds here after it loads the charms.
<utking> hmm, so it loads them?
<utking> https://pastebin.com/DjDprdC2
<utking> try this one then, this gives me errors at least, i might have sent you the wrong one :p
<rick_h> https://usercontent.irccloud-cdn.com/file/DUJysJN5/2018-02-20_09-11-26.gif
<rick_h> so here's a gif of it loading up on jujucharms.com and preparing to be deployed
<rick_h> utking: right, what's 40851682$ ?
<rick_h> utking: that's related to shared-db but isn't anything the bundle/juju knows about
<rick_h> utking: so the gui is erroring stating that relation info doesn't make sense to it
<utking> hmm, it might have been relating to ceph, but that was removed from the model :S
<rick_h> yea, I'd be curious where the number$ came from I guess as that's what's breaking the second bundle from working.
<utking> Hmm, what you can try is to download the openstack base, remove ceph, then export and import, see if that breaks it for you?
<rick_h> utking: yep, that's the issue. I've filed: https://github.com/juju/juju-gui/issues/3548 on the matter
<rick_h> utking: ty for confirming what was up
<salmankhan> Does someone know how to update dns entries in LXD containers already deployed by JUJU
<salmankhan> if I change /etc/resolv.conf or network interfaces file manually, it gets reverted on container restart
<rick_h> salmankhan: I'm not sure you can do that as that's stored and shared info. The idea would be to run dns if you wanted to stick custom names on things
<salmankhan> rick_h: actually I set the subnet dns in MAAS, which got into the containers' interfaces and resolv.conf correctly, but now I want to change that and haven't been able to find any way to do it without destroying the container.
<salmankhan> strange, can't find any doc either on how juju deploys LXD containers, I mean some in-depth doc
#juju 2018-02-21
<rick_h> cory_fu_: ping, you able to join the juju show call today to talk about the releases you put out to the list ?
<cory_fu_> The reactive one is actually the thing I talked about in the previous show, I was just delayed in releasing it.  I could talk about the macaroon changes in libjuju, but I do have an appointment between 1 and 2.  What time is the show?
<cory_fu_> rick_h: ^
<rick_h> cory_fu_: 2pm est
<rick_h> cory_fu_: k, yea that's what I wanted to catch up on. Does this mean libjuju can now login/get a macaroon(cookie) and be used sans any copying .go-cookies around with jaas?
<cory_fu_> rick_h: Ok.  I will do my best to make it.  I should be able to but there's a chance my appointment might run over, or traffic getting back
<rick_h> cory_fu_: understand
<cory_fu_> rick_h: Well, yes, except the only thing I think we still don't support is the initial controller registration; I'm not sure what the process is for that, to send the user, pass, and 2fa, so I'm not certain if that's already supported (though probably the browser-based login would work)
<cory_fu_> rick_h: But, if you already have the controller registered or if browser auth is ok, then it should work without the CLI
<rick_h> cory_fu_: hmm ok. I know folks hit some roadblocks with CI/CD workflows around the cookies and think this helps that somewhat
<rick_h> cory_fu_: k
<cory_fu_> Yeah, it should
<rick_h> cory_fu_: ty
<stub> Should I still have access to land charm-helpers MPs? Because I don't :)
<stub> https://github.com/juju/charm-helpers/pull/108
<stub> And if there is an enthusiastic reviewer interested in me not messing up cross model relations for them, https://github.com/juju/charm-helpers/pull/116
<rick_h> bdx: do you have an ES cluster bundle I can use for a demo?
<kwmonroe> rick_h: the elk-stack works, but doesn't do much (no inputs to ES): https://jujucharms.com/u/elasticsearch-charmers/elk-stack/.  there's also CDK-elastic, which works if you use the edge channel:  https://jujucharms.com/canonical-kubernetes-elastic/112
<rick_h> kwmonroe: k, I just grabbed the beats bundle for the moment
<kwmonroe> uhhhhhh...
<kwmonroe> rick_h: this one? https://jujucharms.com/beats-core/
<rick_h> kwmonroe: yea
<kwmonroe> i wish you hadn't done that
<kwmonroe> that's hot garbage
<rick_h> kwmonroe: just needs to look pretty
<rick_h> lol
<rick_h> of course it is
<rick_h> ok, /me tears that down
<kwmonroe> well, it's all trusty, and the beats are subordinates, so your principals will need to be trusty for that to work
<rick_h> kwmonroe: k
<kwmonroe> i mean, if you're just looking at the canvas, fine, but i would use elk-stack or canonical-kubernetes-elastic for something that somewhat kinda comes close to being actually halfway decent.
<rick_h> kwmonroe: k, will do
<rick_h> kwmonroe: ty for the feedback
<rick_h> yea, just need different visual demos to show new gui features with several realistic looking models
<kwmonroe> roger that
 * rick_h likes his pretty demos to be just a bit more interesting than mysql :P
<TheAbsentOne> Does anyone know a charm (reactive) that uses a mysql database intensively? I am researching possibilities for abstracting the idea of a database and I'm really looking for some good examples to help my learning process of writing charms.
<rick_h> reminder!!!!! Juju Show #30 is just about 50min away so get your lunches out of the way
<rick_h> some pretty stuff to show off today
<kwmonroe> TheAbsentOne: hive uses the mysql interface.  when mysql gets related and becomes ready, the database.available state will be set, at which point hive will configure itself to use it:  https://github.com/juju-solutions/bigtop/blob/master/bigtop-packages/src/charm/hive/layer-hive/reactive/hive.py#L123
<kwmonroe> TheAbsentOne: the "configure itself" bits happen in the ./lib/charms/layer class here:  https://github.com/juju-solutions/bigtop/blob/master/bigtop-packages/src/charm/hive/layer-hive/lib/charms/layer/bigtop_hive.py#L75
<kwmonroe> TheAbsentOne: a much much simpler example is 'ubuntu-devenv', which supports relations to multiple databases, and simply logs the connection info when that relation is ready:  https://github.com/juju-solutions/layer-ubuntu-devenv
<kwmonroe> the logging bits for mysql happen here: https://github.com/juju-solutions/layer-ubuntu-devenv/blob/master/reactive/ubuntu-devenv.py#L77
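A trimmed-down sketch of that pattern, assuming the reactive mysql interface with its usual connection accessors (host/port/database and friends) and a hypothetical myapp state name:

    # react to the database becoming available and record where it lives
    from charmhelpers.core.hookenv import log, status_set
    from charms.reactive import set_state, when

    @when('database.available')
    def configure_database(mysql):
        # connection details are provided by the mysql interface object
        log('mysql ready at {}:{} db={}'.format(
            mysql.host(), mysql.port(), mysql.database()))
        # ... render the app's config with mysql.user()/mysql.password() ...
        status_set('active', 'database configured')
        set_state('myapp.database.configured')  # hypothetical charm state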
<TheAbsentOne> awesome, thank you for the links kwmonroe I will take a look
<TheAbsentOne> The devenv seems very interesting as it supports multiple technologies as well, very nice sir
<rick_h> get your glasses filled up for show time in 15 bdx kwmonroe agprado hml hatch cory_fu_ magicaltrout and anyone else interested. Links inbound shortly
<rahworkx_> rick_h: going to tune in for this one!
<rick_h> rahworkx_: woot woot
<rick_h> https://www.youtube.com/watch?v=tw0xw3cO5IQ is the url to watch the stream and https://hangouts.google.com/hangouts/_/lpeqvj6dczeutfosajoacuqvnae to join in the party
<rick_h> bdx: kwmonroe cory_fu_ hatch hml agprado and party people everywhere ^
<magicaltrout> anyone know what CUDA version is installed on CDK?
<knobby> magicaltrout: https://github.com/juju-solutions/layer-nvidia-cuda
<knobby> looks like 8.0.61-1 and nvidia drivers 375, but I felt like it was 384 for some reason
<magicaltrout> thanks knobby
#juju 2018-02-22
<BlackDex> Hello there
<BlackDex> i have juju 2.1.2 running. And i needed to remove a relation, but it isn't working
<BlackDex> it told me the relation is removed
<BlackDex> also if i retry the remove-relation, it tells me there is no relation
<BlackDex> but the subordinate charm is still there
<BlackDex> is there some way of forcing this?
<utking> Hi guys!
<utking> How can i completely re-deploy a bundle?
<BlackDex> utking: juju destroy-model <model-name>
<BlackDex> after that, juju add-model <model-name>
<utking> ah ok, thanks :)
<BlackDex> and then you can deploy again :)
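(A minimal sketch of the cycle BlackDex describes; the model name and bundle path here are hypothetical:)
    # tear the model down, recreate it, and redeploy the bundle
    juju destroy-model mymodel
    juju add-model mymodel
    juju deploy ./bundle.yaml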
<magicaltrout> BlackDex: whats your definition of "the subordinate charm is still there"?
<BlackDex> well, that it still is listed under the keystone charm in this case ;)
<BlackDex>  juju remove-relation keystone nrpe-xenial
<BlackDex> ERROR relation "nrpe-xenial:nrpe-external-master keystone:nrpe-external-master" not found (not found)
<BlackDex>  juju remove-relation keystone nrpe-xenial
<BlackDex> ERROR relation "nrpe-xenial:nrpe-external-master keystone:nrpe-external-master" not found (not found)
<BlackDex> oeps
<BlackDex> sorry
<BlackDex> https://paste.ubuntu.com/p/39jnBQ8tyZ/
<BlackDex> if i start a debug-hooks on one of those nrpe charms it sometimes starts a hook. And i tried running hooks/stop myself, but that doesn't do anything
<BlackDex> juju run etc.. is also working on these units
<BlackDex> but i can't seem to remove them
<BlackDex> tried restarting that container also, no effect
<kjackal> can you show us the full juju status BlackDex
<kjackal> ?
<pekkari> kjackal: o/
<kjackal> hey pekkari
<pekkari> hey, what is the best branch for contributions in apache/bigtop on github?
<pekkari> I see bunch of BIGTOP branches, master, some hadoops, some branch-*
<BlackDex> kjackal: one moment
<BlackDex> let me see
<magicaltrout> pekkari: kwmonroe is your man for that
<kjackal> pekkari: the PRs should reach the apache bigtop repo
<kjackal> pekkari: here is the repo https://github.com/apache/bigtop
<BlackDex> kjackal: http://paste.openstack.org/show/mA0DWpKLQl875T1WnUdG/
<pekkari> magicaltrout: thanks!
<kjackal> pekkari: the process for submitting a PR is described here: https://github.com/juju-solutions/bigdata-community/wiki/Contributing and https://github.com/juju-solutions/bigdata-community/wiki/Bigtop-Patch-Process
<kjackal> BlackDex: I do not see the nrpe-xenial:nrpe-external-master keystone:nrpe-external-master relation in the relations list
<BlackDex> kjackal: no indeed
<pekkari> kjackal: indeed, sorry for the noise, just quickread and destroyed, that strategy doesn't work :)
<BlackDex> since i removed it ;)
<pekkari> thanks again!
<BlackDex> But the subs are still there
<magicaltrout> on a big data note, kjackal, just tidying up my relations stuff for saiku->apache drill->hadoop connections
<magicaltrout> so its zero command line
<magicaltrout> outside of getting data into hadoop
<kjackal> magicaltrout:  :)
<BlackDex> kjackal: And, if i do an upgrade-charm, they also are triggered.
<BlackDex> but i can't remove them any more
<BlackDex> The only option i think i have now is adding new keystone units
<BlackDex> remove the old ones, if that will even work, since i think it will get stuck on removing the nrpe sub
<kjackal> BlackDex: what is the error in the nrpe-xenial charm?
<kjackal> can you show us the logs?
<BlackDex> let me see
<BlackDex> it had something to do with opening/closing of a port
<BlackDex> i think 5666
<BlackDex> but that it had the value ICMP
<BlackDex> so what i did, i added "return" at the function, to skip that part
<BlackDex> that didn't resolve it
<BlackDex> i think something went wrong with the upgrade
<kjackal> BlackDex: I would juju ssh onto that node and do a tail -f /var/log/juju/unit-nrpe-xenial-25.log . Then I would do a juju resolved nrpe-xenial/25 . And see in the logs what we get this time
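(kjackal's debugging sequence, spelled out as a sketch, using the unit name from this discussion:)
    # shell onto the unit's machine and watch its log
    juju ssh nrpe-xenial/25
    tail -f /var/log/juju/unit-nrpe-xenial-25.log
    # then, from another terminal, retry the failed hook and watch the log
    juju resolved nrpe-xenial/25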
<BlackDex> ill get the output for you
<BlackDex> one moment
<kjackal> If it is a bug in the charm we would better open a ticket
<BlackDex> http://paste.openstack.org/show/K73G7vbWDsriJ2ZUg2tC/
<BlackDex> it is that, or the upgrade went wrong
<BlackDex> i could try to overwrite the files /var/lib/juju/agent/x/x/x with a specific version
<BlackDex> and see if that will help
<kjackal> BlackDex: I am not sure about this charm. You should open a bug with https://api.jujucharms.com/charmstore/v5/nrpe/archive/metadata.yaml . What you could do for now is to replace /var/lib/juju/agent/unit-nrpe-xenial-25/charm/hook/upgrade-charm with something that is a noop
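(A sketch of the no-op replacement kjackal suggests; the hook path is as quoted above, adjust for your unit:)
    # from the unit's machine (e.g. via juju ssh nrpe-xenial/25):
    # overwrite the failing hook with a script that does nothing
    # (tee overwrites the file in place, keeping its executable bit)
    printf '#!/bin/bash\nexit 0\n' | sudo tee /var/lib/juju/agent/unit-nrpe-xenial-25/charm/hook/upgrade-charm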
<BlackDex> it fails with update-config also
<kjackal> for example do a return on line 2 I think
<BlackDex> so that doesn't work
<kjackal> how does it fail now? What is the error msg?
<BlackDex> same error message
<BlackDex> oke
<BlackDex> i have removed ICMP from the .nrpe-config.ports
<BlackDex> that seems to get rid of the error
<BlackDex> hehe
<kjackal> :)
<BlackDex> but it still doesn't remove the unit
<kjackal> is it in error state?
<BlackDex> no
<BlackDex> no error state
<BlackDex> but it should be removed
<BlackDex> stopped
<BlackDex> killed
<BlackDex> whatever
<BlackDex> not still being there
<BlackDex> oke
<BlackDex> thx for the help in the right direction!
<BlackDex> lets see if i can create a report of this. because this had something to do with an upgrade error, and not something else
<BlackDex> all the others were fine
<ejat> hi, how do i use juju to make my ephemeral storage using ceph instead of local disk
<BlackDex> ejat: i haven't used it my self yet. But this page should help you: https://jujucharms.com/docs/stable/charms-storage
<BlackDex> Note, you need the latest version of juju, v2.3
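(A hedged example of the deploy-time syntax from those storage docs; the charm, pool, and size are illustrative, and pool names depend on the cloud, e.g. ebs assumes an AWS model:)
    # ask for storage at deploy time (juju >= 2.3)
    juju deploy postgresql --storage pgdata=ebs,10G
    # list storage instances and their attachment status
    juju storage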
<ejat> BlackDex: ok thanks .. i just tested manually on each ceph-mon node ..
<ejat> the nova compute by default creates volumes on ceph, not local disk
<bdx> charmstore down?
<bdx> https://imgur.com/a/X6ayw
<bdx> https://jujucharms.com/q/nova-compute
<pmatulis> bdx, here too
<rick_h> bdx: pmatulis some sort of outage being tracked in the data center.
<rick_h> just verifying that it's true/folks on it
* rick_h changed the topic of #juju to: Juju as a Service Beta now available at https://jujucharms.com/jaas | https://review.jujucharms.com/ | https://jujucharms.com/docs/ | http://goo.gl/MsNu4I || https://www.youtube.com/c/jujucharms
<rick_h> bah
* rick_h changed the topic of #juju to: Charmstore outage currently in progress | Juju as a Service Beta now available at https://jujucharms.com |  https://jujucharms.com/docs/ | http://goo.gl/MsNu4I | https://www.youtube.com/c/jujucharms
* rick_h changed the topic of #juju to: Charmstore outage currently in progress | Get started with Juju https://jujucharms.com/docs/ | The Juju Show https://www.youtube.com/c/jujucharms
<magicaltrout> is jujucharms.com shafted?
<magicaltrout> apparently its out on the town with rick_h
<rick_h> magicaltrout: :( I'm taking my ball and going home.
<_thumper_> ha
<magicaltrout> kwmonroe stole it
<kwmonroe> magicaltrout: jujucharms.com is perfectly fine.  it's only when you want to, ya know, do something besides look at the front page where you might have trouble.
<magicaltrout> hehe
<kwmonroe> magicaltrout: this is a perfect opportunity to polish up those drill bits.
<magicaltrout> okay the juju api is down
<magicaltrout> much like the snapcraft api about 6 hours ago
<magicaltrout> not my day
<rick_h> magicaltrout: rolling reboot for Spectre and Meltdown...
<magicaltrout> i was actually wanting to record a quick video of my drill -> saiku no commandline data discovery thing
<magicaltrout> :P
<rick_h> So just not a good day
<magicaltrout> i also want to fix up my hdfs drillbit relation so we can do bigtop -> drill -> saiku no commandline hacks data analysis
<bdx> do the hadoop charms support network spaces?
<bdx> kwmonroe magicaltrout: what do I do if I want data-node <-> data-node traffic on a network other than my maas mgmt net
<bdx> looking at the layer code I can't find any network bindings in any of the metadata.yaml
<bdx> having a hard time thinking networking was overlooked on those .... possibly I'm missing something ... or everyone is just using infiniband for their primary interface and I'm just behind the times ....
<kwmonroe> bdx: sorry for the delay, and sorrier for this reply:  the big data charms don't support network spaces.
<bdx> no worries, thx thx
<magicaltrout> oooh she lives
<magicaltrout> rick_h: by the way since this afternoon
<magicaltrout> juju gui on jaas seems completely hosed
<magicaltrout> or i'm doing something that prevents it working cause its been fine
<magicaltrout> but now its not putting workloads onto machines
<magicaltrout> kwmonroe rick_h or someone
<magicaltrout> https://www.dropbox.com/s/katu466hlbcbnf2/jujugui.mp4?dl=0
<magicaltrout> can you just validate my stupidity
<magicaltrout> i dunno if I'm doing something wrong, but i'm pretty sure I'm just doing what I've done 100 times
<magicaltrout> https://gist.github.com/buggtb/27de532516ce54764a8700671bba7e39 that video gave me this as a result of my deployment :)
<magicaltrout> everyone's out at ricks party :'(
<magicaltrout> ah this time it was even crazier
<magicaltrout> oh no this time worked
<magicaltrout> but I deployed 1 zk by itself then scaled that and added more charms
<kwmonroe> magicaltrout: i'm still watching your movies.  seems fine to me.
<magicaltrout> kwmonroe it doesn't deploy any software
<magicaltrout> which, in a software management solution, seems a slight glitch
<kwmonroe> no software can't fail. closing as designed
<kwmonroe> har hars aside magicaltrout... machines are started, but no apps deployed?
<magicaltrout> yeah
<magicaltrout> just sits there with 5 nodes
<magicaltrout> repeated that this afternoon and now
<kwmonroe> magicaltrout: which juju version? 2.3.3?
<magicaltrout> yeah
<kwmonroe> magicaltrout: i just made a new thing from jujucharms.com, and it seems to be good:  https://paste.ubuntu.com/p/DBBw3tFmQC/
<kwmonroe> magicaltrout: was this from a "juju gui" on a pre-existing env?
<magicaltrout> its just jaas
<magicaltrout> on a new model
<kwmonroe> hmph.. i'm using jaas for my controller too
<magicaltrout> yeah but if i launch 1 zk it works
<magicaltrout> but if i launch 3 scaled at deployment time it doesn't seem to
<magicaltrout> just trying again to test my reasoning
<magicaltrout> well thats fscked
<magicaltrout> this time i asked  for 3 units
<magicaltrout> it launched 3 machines
<magicaltrout> and it's scheduling 2 nodes
<magicaltrout> living the dream
<magicaltrout> https://imagebin.ca/v/3sd1YIOgJKcJ
<magicaltrout> check that kwmonroe
<kwmonroe> ack, adding 3 zks.. gimme a minute
<magicaltrout> oh
<magicaltrout> the 3rd one has now shown up
<magicaltrout> weird
<kwmonroe> ugh, out of money.  magicaltrout, please send me a bitcoin
<kwmonroe> send me one now
<kwmonroe> so i can help you
<magicaltrout> hey
<magicaltrout> NASA haven't paid me in 3 months
<magicaltrout> i'm about to go out of business don't ask me for cash :P
<kwmonroe> lolcry
<magicaltrout> okay 3 zk nodes works
<magicaltrout> 3 zk nodes with a relation
<magicaltrout> fails
<magicaltrout> its just done it again
<magicaltrout> that is 100% repeatable
<magicaltrout> i'll file a bug
<kwmonroe> magicaltrout: i may be seeing the same.. i just added 3 zks to my model, i see machines coming up -- https://paste.ubuntu.com/p/PKC45GN5Ns/
<kwmonroe> but no zk
<kwmonroe> let's see if that changes after the machines come up
<magicaltrout> yeah
<magicaltrout> ...
<magicaltrout> it wont :P
<magicaltrout> in any normal deployment i see services going into a waiting state before the machines go live
<magicaltrout> I've got another 4 here now just doing nothing
<magicaltrout> also kwmonroe if you look at the application status in Juju GUI
<magicaltrout> it says "uncommitted" whatever that means
<magicaltrout> also kwmonroe IRC needs a joy emoticon....
<magicaltrout> https://gist.github.com/buggtb/f7299a2116e96e63dc1dac20bfe5b137
<magicaltrout> juju gui refuses to delete the models
<magicaltrout> and juju cli thinks they have 0 machines
<magicaltrout> but juju cli does let me destroy the models
<magicaltrout> ah! So when deploying, if I scale and select 1 unit per machine
<magicaltrout> it deploys the services correct
<magicaltrout> but if I deploy and select manually place, and then in the swimlanes select auto placement
<magicaltrout> it doesn't deploy the services
<kwmonroe> yeah, confirmed here magicaltrout
<kwmonroe> rick_h: haaaalp
<magicaltrout> ha
<magicaltrout> want me to file it on github?
 * rick_h walks back to the computer
<rick_h> what's up?
* rick_h changed the topic of #juju to: Get started with Juju https://jujucharms.com/docs/ | The Juju Show https://www.youtube.com/c/jujucharms
<kwmonroe> rick_h: start a new model at jujucharms.com, add zookeeper (or anything), update the deployment to 3 units, commit.
<kwmonroe> 3 machines will spin up, no apps get deployed
<kwmonroe> right magicaltrout?  is that about the simplest way to do it?
<kwmonroe> i confirmed with both zk and mongo charms
<magicaltrout> yeah, although when I did it I had to relate it to something
<rick_h> kwmonroe: k, what's the logs on the machine that comes up say?
<magicaltrout> but kev already had kibana running so who knows
<magicaltrout> also rick_h https://gist.github.com/buggtb/f7299a2116e96e63dc1dac20bfe5b137
<magicaltrout> the messed up models report 0 machines for the broken services
<rick_h> magicaltrout: yea, you might have to log out and back in to see something
<rick_h> magicaltrout: because of the outage/back up and such
<magicaltrout> even though my model and mymodel2 had 4 or 5 running
<rick_h> magicaltrout: I had errors until I re-authed
<magicaltrout> rick_h: this happened at 3pm
<magicaltrout> well before the downtime
<rick_h> magicaltrout: oic
<magicaltrout> and then post downtime
<rick_h> k, testing out kwmonroe's issue
<magicaltrout> aww shit kwmonroe even gets credited with it
<magicaltrout> today is a bad day
<kwmonroe> heh, lemme cycle my login and try again
<rick_h> sorry but it's my birthday and I'm packing for a trip. I only came back because of "haaaaalp"
<rick_h> so I was lazy and only read the backlog a little ways :P
<magicaltrout> hehe, i was just gonna file it
<magicaltrout> blame kev
<magicaltrout> i saw the bike getting packed, somewhere nice?
<rick_h> magicaltrout: was hoping so, Ozark mountains (Arkansas)
<rick_h> magicaltrout: but it's been raining a solid week and raining tomorrow/sat
<rick_h> magicaltrout: so not sure how much riding we'll get to do...maybe some mudding?
<rick_h> plan was to get out of MI winter and it's warmer here than there right now lol
<rick_h> yea, something odd here
<rick_h> units still show as uncommitted
<rick_h> but the machines came up
<kwmonroe> ha!  mongo just came up for me.  maybe it's a zk thing.
<rick_h> hah, and hatch bailed
<rick_h> bastard
<kwmonroe> and i have to change locations for <reasons>.. be back asap.
<rick_h> k, I'll file a bug and get folks to look in the AM...damn. We've got a new gui that's hit release and wonder if this is still in there...
<rick_h> magicaltrout: what's your GH username?
<rick_h> you know why? because you didn't hit autoplace?
<rick_h> or did you do some placement stuff?
<magicaltrout> buggtb
<magicaltrout> well I hit autoplace on one of the tests I did
<magicaltrout> and it made no difference rick_h
<rick_h> so it works if I manually place
<rick_h> works if I add and click "one unit per machine"
<rick_h> and works if I hit manually place and then later click "auto place"
<rick_h> magicaltrout: do you have a solid failure repro steps?
<rick_h> basically the bad thing is that if I scale 3 new units it says it'll add-unit but since I never placed it that changelog is a lie
<rick_h> in that case it's "broken" but it's because I said manually place and then didn't
<magicaltrout> so i've literally just tried this
<magicaltrout> i've stuck 1 apache-drill and 3 zookeepers on my canvas
<magicaltrout> related them
<magicaltrout> said manual place
<magicaltrout> but then in the machines view i clicked the auto place button
<magicaltrout> oh i see what its doing
<magicaltrout> sorta
<magicaltrout> thats messed up
<magicaltrout> screen grab coming
<magicaltrout> https://imagebin.ca/v/3sdC6OOnVLBJ
<magicaltrout> look at this rick_h
<magicaltrout> i stuck the stuff on the canvas then clicked the auto place button
<magicaltrout> and the deploy
<magicaltrout> it fired up 4 machines
<magicaltrout> which are empty
<magicaltrout> then did something pretty weird with 2 zookeeper nodes
<magicaltrout> which don't exist
<magicaltrout> and wheres the other zk node and wheres the apache drill?
<magicaltrout> they literally don't exist
<rick_h> magicaltrout: hmm, can you file a bug in https://github.com/juju/juju-gui/issues/new with the step by step and I'll get someone to look into it please?
<magicaltrout> sure thing
<rick_h> ty!
<magicaltrout> filed
<magicaltrout> have fun packing
<magicaltrout> thanks for the help kwmonroe
#juju 2018-02-23
<kwmonroe> np magicaltrout, clearly my work here is done.
<kwmonroe> magicaltrout: what's the issue url that you filed?  i'd like to know where to send my bill for all these zookeeper machines that never came up.
<magicaltrout> hehe
<magicaltrout> apparently hatch has fixed it already
<pekkari> Hi #juju, can anyone shed some light on the following paste? http://dpaste.com/0PAMSQY
<magicaltrout> it says its broken
<pekkari> well, I'm adding the code changes below to see if this is something I missed about how to set up a relation-changed hook in reactive charms magicaltrout
<pekkari> I suspect the issue is around line 63, on the number of parameters set magicaltrout, does it say something to you?
<sfeole>  does anyone know if there is any sort of revision tracking for juju charms that corresponds to a specific commit in the charm source (on github for example). For example, charms have different revisions charm/foo-1 -> charm/foo-2 -> charm/foo-3. Is there some sort of tracking that can tell me that revision 2 contains all of the specified fixes/commits in the source?
<jhobbs> the openstack charms have a repo-info file with that info in them
<jhobbs> which is super helpful; but I don't think there is a generic mechanism for that
<heckles1000> Hey guys! I just started looking over the bigtop charms and I have a few questions/comments, that I am going to fire off.
<magicaltrout> *boom*
<magicaltrout> this is like the slowest cannon ever
<heckles1000> apache software release vs charm release? is there any tracking going on between the charms and the apache software, e.g. a build system in place that builds and releases charms when the apache software revs?
<magicaltrout> kwmonroe who deals with the juju charms is also on the bigtop PMC heckles1000
<magicaltrout> so there is reasonably close alignment
<magicaltrout> I don't know how quick the release -> charm update cycle is, but i'm sure its not that shabby
<heckles1000> oozie and other bigtop ecosystem software - do we have contributing guidelines for charming bigtop things ... it seems some are getting sorely outdated .... some things are snapped, some aren't, having a hard time finding consistency here I guess
<magicaltrout> I think stuff outside of the bigtop ecosystem is largely up to community commits so YMMV
<heckles1000> magicaltrout: for example https://github.com/juju-solutions/layer-apache-hadoop-namenode#overview shows hadoop 2.7.1
<heckles1000> is that just a crusty readme, or do you think the software is just not up to date with recent hadoop releases
<magicaltrout> ignore that
<magicaltrout> https://github.com/juju-solutions/bigtop/tree/master/bigtop-packages/src/charm
<magicaltrout> thats the namenode in the charmstore
<heckles1000> lol ... om thanks
<magicaltrout> if you find a charm in the charm store
<magicaltrout> most are linked to project homepages or bug trackers
<magicaltrout> assuming they've been maintained by the canonical folk
<heckles1000> so like ..... is there a place where the single source of truth for all of the legit bigtop charms lives? I feel like I'm sorting through a ton of charms ... some seem irrelevant and it's difficult for me to determine what is legit and what is stale
<heckles1000> readmes like ^ in the charm repo dont help
<heckles1000> :)
<magicaltrout> i refer to the previous link ;)
<magicaltrout> that appears to be the source of the bigtop charms
<magicaltrout> although i've not committed any so i could be lying
<heckles1000> yea
<heckles1000> 8 months ago
<magicaltrout> also
<magicaltrout> https://github.com/juju-solutions/bigdata-community/wiki/Contributing
<magicaltrout> oh
<magicaltrout> link #1
<heckles1000> ahhh thank you
<magicaltrout> in that page
<magicaltrout> https://github.com/apache/bigtop/tree/master/bigtop-deploy/juju
<magicaltrout> seems likely
<heckles1000> ok thanks for those
<heckles1000> but everything looks stale
<heckles1000> like where is hadoop 3.0?
<magicaltrout> also https://github.com/juju-solutions/bigdata-community/wiki/Apache-Bigtop
<heckles1000> jeeze ok
<heckles1000> this is great info
<heckles1000> how can I help get this all organized
<kwmonroe> so... in the bigtop repo... there's bigtop-packages/charms and bigtop-deploy/juju
<kwmonroe> those are the sources of truth for the juju / bigtop charms and bundles
<magicaltrout> kevs here, kev is the canonical source of the truth
<magicaltrout> or something like that
<kwmonroe> heh - thx for taking that fire magicaltrout :)
<magicaltrout> also heckles1000 Bigtop 1.2.1 the latest release
<kwmonroe> heckles1000: as for crustiness, bigtop does indeed lag behind upstream
<magicaltrout> is Hadoop 2.7.3
<magicaltrout> https://cwiki.apache.org/confluence/display/BIGTOP/Bigtop+1.2.1+Release
<kwmonroe> so like, hadoop-3 has just started being worked in bigtop
<kwmonroe> heckles1000: how did you find your way to https://github.com/juju-solutions/layer-apache-hadoop-namenode#overview?  any -apache- charms are our previous gen (before bigtop) and should have been deprecated in favor of bigtop versions, eg, hadoop-namenode (good) vs apache-hadoop-namenode (bad)
<heckles1000> sorry, I didn't mean to call it crusty. I just realized there is a juju quickstart in the readme. I spent the last hour trying to use juju quickstart only to realize it doesn't exist
<heckles1000> oh but I was looking at the wrong readme
<kwmonroe> ha!  doh.  so again, how did you come across that readme for layer-apache-hadoop?  i'll update links as needed (unless google took you there)
<heckles1000> I see what you're saying, that is probably why I'm confused looking at both new and old stuff
<heckles1000> google
<kwmonroe> def switch to bing
<heckles1000> I didnt know the source for the new new. Thank you for the link
<kwmonroe> using bing, you'll never come across the old stuff (or probably anything, really)
<heckles1000> while I got you here do you mind if I ask a few more questions?
<magicaltrout> don't ask if you can ask
<magicaltrout> just fire them off ;)
<magicaltrout> the worst you'll get is silence :)
<kwmonroe> yeah heckles1000, fire away.  magicaltrout is as frisky as he's gonna get.
<magicaltrout> kwmonroe's probably more impressed to find someone actually using his code ;)
<heckles1000> Is there documentation for how to physically configure the nodes, aka how do I provide disk for hdfs through the charm? I see there is a thing called juju storage, how do I use it with hdfs
<heckles1000> you guys are great lol
<kwmonroe> heckles1000: that one is super simple!  you can't.  the bigtop charms don't have any juju storage wired in.
<heckles1000> how do i tell hdfs which disk to use?
<kwmonroe> heckles1000: at the moment, hdfs is configured on the root disk of all datanodes
<heckles1000> how do i give it disks then? do you just deploy nodes with a super large, super resilient root filesystem?
<kwmonroe> heckles1000: hdfs will handle the resilient part for you.  as for the super large part, when you deploy the datanode (hadoop-slave is what we deploy by default, it includes datanode and nodemgr), you can specify a root-disk constraint to whatever size you need.
<kwmonroe> the datanodes are configured to use /var/lib/hadoop-hdfs/data on their root filesystem
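(The constraint kwmonroe mentions, as a sketch; the unit count and size are illustrative:)
    # give each hadoop-slave unit a large root disk at deploy time
    juju deploy hadoop-slave -n 3 --constraints "root-disk=2T"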
<heckles1000> I am using maas and my hdfs servers have 12 disks each
<heckles1000> can i configure maas to just mount the array there
<kwmonroe> hmm.. you lost me.  are you saying each of your servers has 12 disks?
<heckles1000> yes. I have 10 data nodes, each has a 64gb satadom for the os and 12 6tb disks for hdfs storage
<heckles1000> im trying to use maas to create a softraid across the disks, then juju storage to mount the array to the application/machine
<heckles1000> it doesnt sound like this is a thing yet?
<kwmonroe> heckles1000: i don't know if maas can configure/expose storage for you.  you could create an LVM array on each of your datanodes, add your 12 disks as PVs in a VG, create a monster LV, mount that as /mnt/mybigLV, then  "rmdir /var/lib/hadoop-hdfs/data/dir && ln -s /mnt/mybigLV /var/lib/hadoop-hdfs/data/dir"
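(kwmonroe's LVM workaround sketched end to end; the device names, VG/LV names, and mount point are all hypothetical:)
    # pool the data disks into one big LV and mount it
    sudo pvcreate /dev/sdb /dev/sdc /dev/sdd          # one PV per data disk
    sudo vgcreate hdfs_vg /dev/sdb /dev/sdc /dev/sdd
    sudo lvcreate -l 100%FREE -n hdfs_lv hdfs_vg
    sudo mkfs.ext4 /dev/hdfs_vg/hdfs_lv
    sudo mkdir -p /mnt/mybigLV
    sudo mount /dev/hdfs_vg/hdfs_lv /mnt/mybigLV
    # point the hard-coded datanode dir at the big mount
    # (rmdir assumes the dir is still empty)
    sudo rmdir /var/lib/hadoop-hdfs/data/dir
    sudo ln -s /mnt/mybigLV /var/lib/hadoop-hdfs/data/dir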
<heckles1000> would I be able to help contribute to making juju storage work with these charms, or would I have to wait for the charms to grow storage capability?
<heckles1000> ahh I see
<kwmonroe> heckles1000: it may be a thing -- i'm just not really familiar with maas
<jhobbs> you can tell maas to make a software raid partition and tell it where to mount it
<jhobbs> that should be fine
<kwmonroe> ah cool jhobbs - thanks!
<kwmonroe> so yeah heckles1000, however you make the LV, mount it on the datanode and symlink the mountpoint over to $dfs.datanode.data.dir
<kwmonroe> fwiw heckles1000, this is just how you'd have to do it today.  proper storage support (juju storage) is a way better option, it's just not there today.
<heckles1000> jhobbs, kwmonroe: so I can use maas + juju to do half of that process then by making the raid and mounting it using maas + juju storage
<heckles1000> then just symlink the dirs ^^
<jhobbs> i don't know anything about these charms, sorry :/ but maas will make the raid set, format it however you tell it to, and mount it wherever you want it to; that will all be done before juju and the charms take over
<heckles1000> awesome
<heckles1000> jhobbs: thanks
<kwmonroe> heckles1000: the missing piece is that the charms wouldn't know what to do when you attached storage to them.  ideally, they'd have to do the datanode config to set the mount point, etc.
<heckles1000> I see
<heckles1000> is it just 1 single config that specifies the mountpoint?
<kwmonroe> i think so heckles1000, see dfs.datanode.data.dir:  https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
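(A sketch of the single knob being discussed, as it would appear in hdfs-site.xml on a datanode; the config path is the usual bigtop location and the value is the hard-coded one quoted later in this log:)
    # show the property and its value on a datanode
    grep -A 2 'dfs.datanode.data.dir' /etc/hadoop/conf/hdfs-site.xml
    # expected shape of the output:
    #   <name>dfs.datanode.data.dir</name>
    #   <value>/var/lib/hadoop-hdfs/cache/$user/dfs/data</value>
    # </property>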
<bdx> oh awesome!
<kwmonroe> may also need to tweak some of the fs-limits
<bdx> lets get this going
<bdx> I need that too
<bdx> kwmonroe: slacking
<kwmonroe> heh
<kwmonroe> i thought you needed network spaces bdx!
<bdx> I need it alllll
<kwmonroe> you can't possibly want 2 things in a week
<magicaltrout> it means 2 people use kevs code
<magicaltrout> he must be due a raise
<kwmonroe> half of 'em are using the old stuff that cory_fu wrote though :/
<kwmonroe> there goes my moneies
<magicaltrout> just buy him a fedora or 7
<kwmonroe> lol
<cory_fu> ha
<heckles1000> kwmonroe: how can I modify the charm to set $dfs.datanode.data.dir config value?
<kwmonroe> heckles1000: another simple one!  you can't.  but if you open an issue for me here, https://github.com/juju-solutions/bigtop/issues, i'll see what i can do ;)
<kwmonroe> heckles1000: the datanode config is hard coded by bigtop here:  https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/conf.pseudo/hdfs-site.xml#L48
<kwmonroe> so while you can't change it, you *can* do filesystem tricks to make it do what you want... hence the earlier workaround to symlink wherever your big 'ol mount point is over to that hard coded location ^^
<kwmonroe> here's the workaround all in one go, assuming your big mount point is mounted at /mnt:  juju run --all "mv /var/lib/hadoop-hdfs /mnt && ln -s /mnt /var/lib/hadoop-hdfs"
<heckles1000> kwmonroe: sweet that doesn't sound too bad.
<kwmonroe> that'll be $300
<heckles1000> kwmonroe: I do have something for you ...
<kwmonroe> oh dear
<heckles1000> kwmonroe: https://github.com/juju-solutions/bigtop/issues/53
<kwmonroe> that'll be $500
<heckles1000> haha
<heckles1000> kwmonroe: thats great, thanks again for the workaround
<kwmonroe> np heckles1000 - thx for opening the issue.  i'll get that scheduled in soon.
<kwmonroe> bdx: PRs welcome for https://github.com/juju-solutions/bigtop/issues/53 ;)  i'll give you $40.
<magicaltrout> i'm skint i'll do it for $30
<kwmonroe> has to be a *mergeable* PR.  sorry magicaltrout.
<magicaltrout> hey at least you don't need to get an ICLA off me :P
<bdx> kwmonroe: aha, I gotchu https://github.com/juju-solutions/bigtop/issues/54
<kwmonroe> um, bdx, that is the opposite of a PR
<magicaltrout> lol
<bdx> kwmonroe: is the data-node provided by hadoop-slave?
<kwmonroe> yeah bdx, the slave includes layer-datanode and layer-nodemgr.  those respective sources live at https://github.com/juju-solutions/layer-hadoop-datanode and https://github.com/juju-solutions/layer-hadoop-nodemanager
<bdx> cool, thanks
<kwmonroe> bdx, it's not ideal, but charm build can't pull layers from repo subdirs, so we needed those layers in their own repos.
<bdx> totally
<bdx> is layer-hadoop-datanode used in any other bigtop charms?
<kwmonroe> no
<bdx> it seems sensible to add the storage bits to the layer-hadoop-datanode .... but I mean .... it seems more sensible to add them to the top layer
#juju 2018-02-24
<kwmonroe> yeah bdx, i need to think about it a bit.  on the one hand, you *could* just have the datanode storage dir set to a specific place.  but we do put other stuff in there, like namenode bits, cache, checkpoint data, etc.  so maybe it makes more sense to have a single storage option for "/var/lib/hadoop-hdfs" that lives in the bigtop base layer, rather than "/var/lib/hadoop-hdfs/cache/$user/dfs/data" that would live in the datanode layer.
<bdx> bundles don't support storage do they?
<bdx> kwmonroe: I think all you need is https://github.com/jamesbeedy/layer-hadoop-datanode/blob/add_hdfs_storage/reactive/datanode.py#L14,L16 and https://github.com/jamesbeedy/layer-hadoop-datanode/blob/add_hdfs_storage/reactive/datanode.py#L64,L69
<bdx> ^ may get it real close, going to find a way to test this
<kwmonroe> sweeeet bdx!  i like how simple that is versus exposing the hdfs-site.xml config.
<kwmonroe> i also like how i didn't have to write anything ;)
<kwmonroe> $40 coming to you at the next summit
<bdx> well, I mean
<bdx> write me a bundle that uses storage
<bdx> ahah
<bdx> :)
<bdx> I've never seen storage in a bundle I don't think
<bdx> possibly ill just deploy the bigdata bundle with my built charm in place of the hadoop-slave that comes with the bundle
<bdx> then just add the storage following the bundle deploy
<kwmonroe> bdx: looks like the storage key is valid in a bundle: https://github.com/juju/charm/blob/v6/bundledata_test.go#L41
<kwmonroe> but i haven't personally seen bundles with storage either
<bdx> oh, thats what I was looking for
<bdx> so for what I'm doing it would be
<bdx> storage:
<bdx>   hdfs: ebs,10G
<bdx> ill try it out
<bdx> applications:
<bdx>   postgresql:
<bdx>     charm: cs:postgresql-171
<bdx>     num_units: 1
<bdx>     series: xenial
<bdx>     storage:
<bdx>       pgdata: ebs,10G
<bdx> boom
<kwmonroe> nice
<kwmonroe> bdx: i need to head out, but thanks a lot for making my dreams come true.  have a good weekend!!
<bdx> kwmonroe: backatcha
<bdx> http://paste.ubuntu.com/p/D5KtMbDGnP/
<bdx> working bundle^
<bdx> ill pr that in a few
<bdx> thanks again
#juju 2020-02-17
<thumper> timClicks: now?
<timClicks> sure
<timClicks> thumper: see you in 1!
<tlm> babbageclunk: got 5 minutes after standup? Need to pick your brain about manifolds
<babbageclunk> tlm: yup yup!
<wallyworld> thumper: a multiwatcher race fix https://github.com/juju/juju/pull/11223
<thumper> will look
<wallyworld> thumper: ah, i just pushed a different change before i saw your comment
<wallyworld> i can revert to what i had originally, or keep what i have now
<wallyworld> my original change is safer because it handles all callers
<wallyworld> whereas now in the second attempt, it only fixes just one caller and others can still do the wrong thing
<thumper> wallyworld: I looked at the call site, and there are potentially long blocking things there
<thumper> so decided that your original approach was good
<thumper> babbageclunk: is the release going smoothly?
<babbageclunk> thumper: huh - apparently not :(
<babbageclunk> https://jenkins.juju.canonical.com/job/release-juju-stable/114/
<thumper> :(
<babbageclunk> digging now
<babbageclunk> I don't get it - wait-for-operation finished but the proposed streams json file doesn't contain entries for 2.7.2
<thumper> https://github.com/juju/juju/pull/11224 forward port of older PR
<babbageclunk> Ok, I can see in the update-streams log that it's written a log line for this request
<babbageclunk> but the json file says updated on 17th of Jan
<babbageclunk> https://streams.canonical.com/juju/tools/streams/v1/com.ubuntu.juju-proposed-tools.json
<babbageclunk> http://streams.canonical.com/juju/new-tools/juju-dist/tools/update-streams.log
<thumper> is it putting in proposed or released?
<babbageclunk> at this point in proposed
<babbageclunk> ok, digging around in the code isn't really helping - rubber ducky?
<wallyworld> babbageclunk: it looks like the job was run with agent_stream=proposed
<wallyworld> surely it should be released?
<babbageclunk> yeah, I think that's right at this point isn't it?
<wallyworld> no
<babbageclunk> I think it tests them and then updates to released
<wallyworld> we are updating released streams for a released build
<wallyworld> really?
<babbageclunk> looking at the jobs again...
<wallyworld> i could very well be wrong
<babbageclunk> from this list of jobs https://jenkins.juju.canonical.com/job/release-juju-stable/
<babbageclunk> It looks like it releases them to proposed and then to released.
<wallyworld> ok, that i didn't realise
<babbageclunk> And it's the first bit that doesn't seem to have worked - the s3 stuff has 2.7.2 and a recent update time, but streams.canonical.com doesn't.
<babbageclunk> I'm going to increment the poke value and resume the build, maybe it was an eventual-consistency thing?
<wallyworld> tlm: did you run into this issue last release? can you recall what was done?
<tlm> wallyworld: catching up. We did have issues with this. It does go proposed => released from memory
<wallyworld> yeah, i was totally wrong
<tlm> I think I was updating the poke value
<wallyworld> but it's not even going to proposed atm
<babbageclunk> yeah, I'm rerunning it with poke=1 now
<tlm> I think jerff gave me issues on release and simon had to get someone to do something on some server somewhere but I could be very wrong as well
<wallyworld> hpidcock: the default task status is Completed. did you specify Running and still not get results?
<wallyworld> i can double check to be sure
<hpidcock> oh right. wallyworld I was just expecting `juju operations` to show all tasks
<wallyworld> the actions api (currently unused but there) defaulted to Completed so i was just mirroring that
<wallyworld> i think the idea was that people would usually want to see stuff that had finished
<wallyworld> but i'm guessing
<hpidcock> i think we should assume all
<wallyworld> we can do that and ask for feedback
<wallyworld> the original api is 5 years old and never really used
<hpidcock> yeah lets go with what feels right and nice
<hpidcock> like kubectl doesn't just show pods that are running when you go kubectl get pods
<wallyworld> i was debating which was better but i think it's less of a surprise to go with all
<wallyworld> get pods vs list a large set of things is different though
<hpidcock> you can have like a 1000 pods in a k8s cluster
<hpidcock> operations are also an aggregate
<wallyworld> especially with all the extra juju operators :-)
<hpidcock> I think we need some garbage collection on the list
<wallyworld> we have it
<wallyworld> there's a pruner worker
<hpidcock> well then that sounds like the list will be somewhat small
<wallyworld> depends - prune > 3 days old i think
<hpidcock> it's also a model based command right?
<wallyworld> yep
<wallyworld> i added the batching stuff just in case
<wallyworld> we can always wire that up if needed
<zeestrat> Got a couple of PR's for Ansible in charm helpers if anyone could help with review and merge: https://github.com/juju/charm-helpers/pull/403, https://github.com/juju/charm-helpers/pull/404
<nammn_de> manadart: if you find some time later could you take a look and see whether those API params and result changes in params/network.go make sense? https://github.com/juju/juju/pull/11213/files#diff-b8ec81ce097ebb5fa3ef9f36b26bd128R934 With this I could start working on the cmd patch tomorrow or later after ci
<manadart> nammn_de: Yes; though it looks like I will be working the release today.
<nammn_de> manadart: sure no worries
<anastasiamac> babbageclunk: want to catch up at some point?
<babbageclunk> anastasiamac: yup yup
<babbageclunk> might need to be getting the release sorted first though :(
<anastasiamac> babbageclunk: awesome \o/ ping when u r free
<evhan> I can't seem to find docs for the right form for "serviceAccounts" in the podspec now.
<evhan> I know it *was* "serviceAccount", and I thought I then had it working with an array under "serviceAccounts", but that doesn't seem to be right (anymore?).
<evhan> i.e. following this structure: https://discourse.jujucharms.com/t/updated-podspec-yaml-new-features/2124
<evhan> (Under "Workload permissions and capabilities".)
<wallyworld> evhan: from memory, the core yaml has serviceAccount (singular) which is just for the main workload pod. the additional k8s specific yaml has serviceAccounts (plural) to allow additional, arbitrary ones to be created
<wallyworld> eg https://pastebin.ubuntu.com/p/FC3hDhJbzt/
<wallyworld> kelvinliu: i think your vsphere pr is good to land? we can then do a 2.7.3 candidate snap
<anastasiamac> wallyworld: btw ... HAPPY BIRTHDAY!
<wallyworld> i was hoping to keep it quiet
<tlm> happy birthday wallyworld
<wallyworld> now the cat is out of the bag
<anastasiamac> haha can't keep it quiet! if not be loud now, then when? :D
<kelvinliu> wallyworld: yep
<wallyworld> at my funeral
<kelvinliu> wallyworld: happy birthday!
<wallyworld> ty
<babbageclunk> yay happy birthday wallyworld! Let's all sing at standup!
<evhan> wallyworld: Yeah, I had that but get json: unknown field "serviceAccount" (from operator master).
<evhan> Also, happy birthday. :)
<evhan> Hmmm, that's what the provider expects too (ref. https://github.com/juju/juju/blob/develop/caas/kubernetes/provider/specs/v2_test.go). Maybe I'm doing something dumb here.
<wallyworld> evhan: sorry, was in meeting. can you share the yaml you are trying? kelvinliu can help as well
<kelvinliu> wallyworld: sure
<kelvinliu> evhan: the top level is serviceAccount, but the one under kubernetesResources is serviceAccounts (because it's an array)
<evhan> https://paste.ubuntu.com/p/W5Fpj4NqMk/
<evhan> I noticed the model was on 2.7, so I'm just trying to upgrade and go again on 2.8-beta1.
<kelvinliu> evhan: u have to use `version: 2` k8s spec
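(A hedged sketch of the two placements being described: serviceAccount, singular, at the top level for the workload pod; serviceAccounts, plural, under kubernetesResources for extra accounts. Field names beyond those two keys are illustrative:)
    # write a version 2 pod spec with both forms
    cat > podspec.yaml <<'EOF'
    version: 2
    serviceAccount:
      automountServiceAccountToken: true
    kubernetesResources:
      serviceAccounts:
        - name: extra-sa
          automountServiceAccountToken: true
    EOF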
<evhan> Hrnnnnn I've run into this before too
<evhan> Thank you.
<kelvinliu> the legacy doesn't support rbac
<kelvinliu> np
<evhan> Right, I left that key behind when moving values around. Sorry for the noise.
<wallyworld> evhan: there's some doc improvements we are working on. everything is a bit scattered on discourse
<timClicks> "a bit"
<wallyworld> well i had all the main doc posts pinned but they got unpinned :-(
<timClicks> imo a good intermediate step would be edit cory john's post and remove all of the other pins
<timClicks> because 9 pinned posts isn't effective either
<evhan> FWIW when looking for help w/ k8s configs, all roads tend to lead to the "Updated pod spec yaml" post above.
<evhan> Via search and links from other posts.
<evhan> https://discourse.jujucharms.com/t/updated-podspec-yaml-new-features/2124 <----- that one
<wallyworld> getting stuff off discourse and properly into docs is the best solution IMO. discourse posts are a terrible medium for docs
 * timClicks nods
<timClicks> ^ emphasis on the "intermediate"
<hpidcock> kelvinliu: can you please do a 2.7 -> develop merge, there is a lot of k8s changes from you that need merging
<anastasiamac> babbageclunk: about NewRestoreCommand and export_test... I'm ok to pass in a connect func but it'll need to be accessible for main.go too... so
<anastasiamac> babbageclunk: are you ok for me to have connect exported from a database?
<kelvinliu> hpidcock: sure
<hpidcock> kelvinliu: thank-you
<kelvinliu> np
<babbageclunk> anastasiamac: yeah, I think that's fine
<anastasiamac> k
#juju 2020-02-18
<evhan> After adding storage to a running k8s charm, is it necessary to perform some manual action, (e.g. cycle the units)? I have a set of "storages" in pending after having added storage, but can't see any corresponding activity and I'm unsure how to make it go.
<wallyworld> evhan: how did you add the storage? juju attach-storage isn't supported for k8s units as all units need to be homogeneous. did you use kubectl patch on a specific pod?
<evhan> No, I introduced it in metadata.yaml and upgrade-charm'ed.
<evhan> Sorry, should have mentioned that.
<evhan> I suppose with the CLI tools it would normally be `juju add-storage` followed by `attach-storage`, in which case I would have gotten the message that it was unsupported.
<wallyworld> in metadata it declares it can support storage but you need a --storage argument to allocate storage against the declaration. did you use --storage when upgrading?
<evhan> Yeah, sorry, the command was: `juju upgrade-charm --path . fluent-bit-k8s --storage varlog=20M`
<wallyworld> evhan: it may well be that's not something we've properly tested and there's a bug to fix, we'll have to look at it
<evhan> OK, I'll try to collect some more info. Playing around a bit more I think I may have taken the side door and avoided all the "you can't do that" signs.
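(evhan's flow, roughly, as a sketch; the charm name and size are the ones given above:)
    # declare the storage in metadata.yaml, then upgrade with it allocated
    juju upgrade-charm --path . fluent-bit-k8s --storage varlog=20M
    # inspect what juju thinks of the storage afterwards
    juju storage --format yaml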
<tlm> wallyworld, babbageclunk: what is the best method to work out the current controller hostname/ip:port in workers ?
<babbageclunk> tlm: hmm, not totally sure - maybe from the agent worker? Although that's machine level - would need to check. wallyworld?
<wallyworld> the api port is a controller config setting
<wallyworld> there's typically no hostname, only ip. there's a controller nodes collection, and also machines. you get the machine id from the controller tag
<tlm> yeah ip is fine
<tlm> what do we report when the controller is inside k8s
<tlm> api server really wants a service in that case but I am tossing up if I should try and do that kung fu
<hpidcock> tlm: if we know the controller is in the same cluster, maybe we use the service directly, if the controller is external, we can create a service and publish endpoints
<wallyworld> we record the service ip address in state
<wallyworld> for the controller, there's a cloudService collection
<tlm> hpidcock: yep thats fine but still need to get the ip
<wallyworld> i can't recall off hand what the doc id is for the controller
<wallyworld> we use cloudService for the service address for all deployed apps
<tlm> does the data already exist in the machine manifolds ?
<tlm> just need ip and a bool of current controller if it's inside of k8s
<wallyworld> you need to query state in the manifold start func where the statepool / state object is passed in
<wallyworld> you have the controller tag, so can get the key of the controllerService doc to query
<tlm> ok will take a look. Going to hard code an ip in and get this finished then swing back to it
<wallyworld> righto
<wallyworld> kelvinliu might remember off the top of his head what the cloudService doc id for the controller is
<wallyworld> to allow the service ip to get retrieved
<kelvinliu> wallyworld: cloudService contains the svc info but not container info
<wallyworld> that's what we want
<wallyworld> the service ip address
<tlm> are we talking kube service ?
<wallyworld> of the controller
<wallyworld> we talk via the service ip always
<tlm> if so do we make a service even if the controller is not in k8s?
<kelvinliu> right, cloudService .Addresses()
<wallyworld> what's the doc key though?I can't recall offhand
<hpidcock> wallyworld: when you're free next https://github.com/juju/juju/pull/11214 no rush, I'm working on the next PR and will assume this is "done done"
<kelvinliu> https://github.com/juju/juju/blob/develop/state/state.go#L1135
<kelvinliu> it's controller UUID for k8s controller
<wallyworld> ah yeah, thanks
<tlm> thats k8s specific
<tlm> doesn't apply when we are not running the controller in k8s
<wallyworld> right, but non k8s deployments do not write anything to the cloudService collection
<wallyworld> the cloudContainer collection is also k8s specific
<tlm> no what hpidcock and I are thinking is, for every controller in juju we make a service in each k8s cluster we have. This service either points to the internal controller or an external one. We can then just piggyback off of this known point for things like admission/sidecars etc etc
<wallyworld> and this is a k8s only worker right?
<tlm> yep
<wallyworld> depending on where the controller is running, i think we'll need to get its ip address from either the machine collection or the cloudService collection
<tlm> we'll only need ip for outside of k8s. Maybe we HO this at some stage to make sure it's a good idea and not cup holders
<wallyworld> i'm free whenever
<tlm> ok joining :)
<tlm> hpidcock: want to join ?
<hpidcock> sure why not
<kelvinliu> https://github.com/juju/juju/pull/11226 merge 2.7 to develop, anyone free to take a look? thanks!
<hpidcock> kelvinliu: thanks for getting that all merged. only one issue
<kelvinliu> thx, just found the problem, fixed
<hpidcock> wallyworld: only reason to move up the merge of that PR is an intermittent test failure fix for ActionSuite.TestFindActionTagsByLegacyId
<wallyworld> ok, i'll look
<hpidcock> merge on https://github.com/juju/juju/pull/11224 failed due to it
<wallyworld> hpidcock: didn't quite get done reviewing, have to EOD, will finish tomorrow
<nammn_de> manadart: I opened an initial move-to-space apiserver patch. I do have some unclear things which I mentioned in the patch as well as in the network cli spec. Could you take a look please? https://github.com/juju/juju/pull/11213
<nammn_de> manadart achilleasa: this is the command part patch of the remove-command.  I have linked the spec as well https://github.com/juju/juju/pull/11183 If someone wants to take a look at the spec instead/with
<manadart> nammn_de: I'll take a look when I can.
<achilleasa> jam: did you have a chance to take a look at my replies in 11204? Wdyt about renaming the call to "CommitHookChanges"?
<jam> achilleasa: offhand that sounds pretty good. I'm in a bit of a crunch right now, but I'll try to get to it this afternoon
<stickupkid> jam do you have the rights to merge this? https://github.com/go-goose/goose/pull/77
<stickupkid> or rick_h
<hml> stickupkid: I think I do
<hml> stickupkid: merged
<stickupkid> hml, tyvm
<stickupkid> yay unblocked myself
<stickupkid> win win
<manadart> nammn_de: I think the sections should be "Setting units to track a branch", "Changing application config on a branch" etc, rather than "Units are made to track the branch", "Changes are made to the branched application" and so on.
<nammn_de> manadart: rgr
<nammn_de> manadart: Need some input on `6. Aborting a branch`. After trying it out, it seems that we can only abort a branch when no units are following it. The error message indicates that resetting the settings would work as well; it did not work for me
<manadart> nammn_de: Not sure what you mean. If units tracking the branch are set to track "master", then we should be able to abort it.
<manadart> If the branch has no config changes, then committing the branch is an effective abort.
<nammn_de> manadart: ahh making them track master. Somehow this slipped me :D. Will add that to the post.
<nammn_de> But the abort message returns "ERROR branch is in progress. Either reset values on tracking units or remove them to abort." Which indicates that if I reset the values I can run `juju abort <branch>` again, which is not the case
<manadart> nammn_de: I would have to read the code.
<nammn_de> manadart: ahh I just mean this independent of the code. Just from a user perspective
<nammn_de> was more of a concern
<hml> quick backport review pls: https://github.com/juju/juju/pull/11228
<hml> another quick backport review pls: https://github.com/juju/juju/pull/11229
<hml> a third is coming once 11229 lands
<nammn_de> just wanted to point out that aborting a branch (`juju abort`) seems to not work that easily.  Tried different possibilities. https://paste.ubuntu.com/p/Rsf5XDPrWZ/
<manadart> hml: Approved the first one.
<manadart> nammn_de: I remember now. Since we are going to be doing charm upgrade rollout using branches, you have to see them through.
<manadart> nammn_de: Have you read the spec? I'll dig it up.
<nammn_de> manadart: No only the doc. If you have the spec I can read it up.
<nammn_de> My point is mostly that the returned error message is confusing, as it indicates that resetting helps
<hml> manadart:  ty
<stickupkid> manadart, you got 5-10 minutes to pair on these openstack port changes? want to make sure I'm getting it right
<manadart> stickupkid: Yep.
<nammn_de> hml: you worked on this before afaict. Does this patch make sense to you? https://github.com/juju/juju/pull/11230
<hml> nammn_de: looking
<hml> nammn_de:  approved
<nammn_de> hml: thanks!
<rick_h> achilleasa:  hml was just going to ping that anything we can backport into 2.7.3 that we want folks to have charm-wise would be great. I saw hml had a couple of them going so <3
<achilleasa> stickupkid: so... remember that interesting test with leadership?
<stickupkid> achilleasa, yeah
<stickupkid> better be good
<hml> merge conflicts fixed for pr, rebase review please: https://github.com/juju/juju/pull/11229
<stickupkid> looking for a book type story out of this achilleasa
<achilleasa> looks like LeadershipClaimer hits the one in state/leadership.go but LeadershipChecker doesn't :D
<stickupkid> HA
<achilleasa> all I need to do is find out why this is set up like that...
<stickupkid> that'll be why it couldn't find it then
<achilleasa> I verified that the checks use the same model UUID
<stickupkid> BECAUSE IT'S CRAP
<stickupkid> haha
<freyes> hello, I was taking a look at the function AllSpaces() which should return the spaces in a model, but I'm not seeing any filtering by model uuid, am I missing some context set somewhere? - https://github.com/juju/juju/blob/2.7/state/spaces.go#L288
<achilleasa> if achilleasRunsTests { leadershipChecker = makeRandomChecker() }
<stickupkid> achilleasa, hahaha yeah
<achilleasa> freyes: you would normally run that on a state instance obtained for a particular model; it will filter by uuid behind the scenes for you
<freyes> achilleasa, ok, so I should go back up the call stack to figure out where the state object is being built, right?
<freyes> achilleasa, what happens if there is no state instance defined? should it be failing with an error?
<achilleasa> freyes: if you are debugging an API call you will normally find this call in the facade code (apiserver/facade/{agent, client})
<freyes> achilleasa, I'm looking into the upgrades steps
<freyes> if you want more context, I'm troubleshooting this error: https://bugs.launchpad.net/juju/+bug/1863777
<mup> Bug #1863777: getting machine upgrade ops: space with name:  not found <sts> <upgrade-juju> <juju:New> <https://launchpad.net/bugs/1863777>
<achilleasa> freyes: hmmm... I guess this is the one you are looking for: https://github.com/juju/juju/blob/develop/upgrades/steps_27.go#L28-L33
<achilleasa> and https://github.com/juju/juju/blob/develop/upgrades/steps_27.go#L42-L48 (most probably)
<freyes> right, that's where I started from. I should probably keep reading that code and study it with a debugger, it will probably be easier
<freyes> achilleasa, appreciated, and congratulations on your book :-)
<achilleasa> freyes: try searching in the machines collection. I think the error might be referring to the addresses array in the machine docs
<achilleasa> freyes: thanks ;-)
<freyes> achilleasa, ah, interesting what you just mentioned .... it was recently mentioned to me they manually added new nics to the controller machines, so probably the ip addresser registered them and it's expected to have that space
<manadart> freyes: What has happened here is that the space name has been supplied with machine addresses by MAAS, but reload-spaces has not consumed MAAS' space topology into the collection.
<freyes> manadart, but probably because maas is not aware that the controllers machines have nics in the maas-mgmt space since they were hotplugged
<freyes> https://bugs.launchpad.net/juju/+bug/1863777/comments/1
<mup> Bug #1863777: getting machine upgrade ops: space with name:  not found <sts> <upgrade-juju> <juju:New> <https://launchpad.net/bugs/1863777>
<freyes> so this apparent inconsistent state is the one triggering the issue
<manadart> freyes: I think MAAS and Juju are aware, given that addresses in the machines collection are decorated with the name. But without running juju reload-spaces on the (controller) model, it is not in Juju's spaces collection, so when we try to look up an ID for it, it is not found.
<freyes> I see, I think I have what's needed to come up with a reproducer, I will post my progress in the bug
<freyes> thanks, manadart
<manadart> Sure thing freyes.
<achilleasa> stickupkid: so the checker is from the apiserver package....
<stickupkid> achilleasa, is it the raft one?
<achilleasa> stickupkid: I can't figure out where it gets initialized. So I took a shortcut by patching the suite state and creating a new uniter API.... looks like it gets the state ones now...
<stickupkid> ha
<achilleasa> yeah... you normally get back a "uniter state" instance from these calls which seems to be a shim but I can't really tell...
<stickupkid> hml, done
<hml> stickupkid: ty!
<hml> stickupkid: turns out my 3 backports have enough overlap that they need to be done in serial.  :-(
<timClicks> question for the audience - which juju plugins do you use? (did you know that plugins were a thing?)
<evhan> As of yesterday. :)
#juju 2020-02-19
<tlm> hey kelvinliu_ what version of k8s.io should we be running at the moment in juju based on your last PR ?
<kelvinliu_> tlm:  1.17
<tlm> ta think I did a bad merge
<kelvinliu_> the current version is commented in the toml file
<kelvinliu_> ah?
<tlm> all good will squash mine with develop
<kelvinliu_> ok
<timClicks_> wallyworld: did moving the links to the bottom of the install page make things easier for you to update the installer links?
<wallyworld> timClicks_: not sure, i expected to find them where they appeared in the final render and had to go hunting for them
<wallyworld> but maybe that was a first time thing
<anastasiamac> babbageclunk: PTAL https://github.com/juju/juju-restore/pull/5
<babbageclunk> ting an l
<anastasiamac> :)
<timClicks_> wallyworld: you must have missed that comment at the top .. "ARE YOU UPDATING THIS DOCUMENT WITH A NEW RELEASE? Ctrl+F "LINKS TO CHANGE""
<wallyworld> i did
<wallyworld> my first instinct is to change stuff in situ so i can see the output on the rhs pane
<wallyworld> and you think boys read directions? :-D
<anastasiamac> yes, I'm sure boys do :)
<anastasiamac> also they are very good when given a to-do list :)
<anastasiamac> babbageclunk: \o/ thnx :) i've addressed and will land
<kelvinliu_> wallyworld: im still working on the extend fileset stuff, i don't think i will have time to do the focal build today, should I do it tmr or hand it over to someone else?
<wallyworld> let's leave it for now
<wallyworld> we need to solve this before 2.7.3 at the latest, would be nice to do for 2.7.2 if something quick
<zeestrat> timClicks_: re plugins, I have a small port forwarding plugin for troubleshooting because I run my local dev inside Vagrant.
<kelvinliu_> ok
<timClicks_> zeestrat: that's neat
<hpidcock> wallyworld: if you get a chance to give the green tick on https://github.com/juju/juju/pull/11214 that would be awesome
<wallyworld> will do, will look now and after dinner if i don't get it done immediately
<wallyworld> hpidcock: gotta run for dinner, left 2 comments, back in an hour or so
<achilleasa> manadart: the test from https://github.com/juju/juju/commit/d8666bd39bb1401ecc95d7144e3cf2429549fc15 probably needs jc.SameContents; I am occasionally getting https://paste.ubuntu.com/p/bsHMQPz7Gy. I will add a drive-by fix in the PR I am working on ATM
<manadart> achilleasa: Ack. Thanks.
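A minimal sketch of the fix being described, assuming the usual juju test imports (gopkg.in/check.v1 and github.com/juju/testing/checkers); the suite and values are made up for illustration.

    package example_test

    import (
        "testing"

        jc "github.com/juju/testing/checkers"
        gc "gopkg.in/check.v1"
    )

    func Test(t *testing.T) { gc.TestingT(t) }

    type spaceSuite struct{}

    var _ = gc.Suite(&spaceSuite{})

    func (s *spaceSuite) TestSubnetIDs(c *gc.C) {
        got := []string{"subnet-b", "subnet-a"}

        // gc.DeepEquals fails intermittently when the IDs come back in a
        // different order; jc.SameContents only checks membership.
        c.Assert(got, jc.SameContents, []string{"subnet-a", "subnet-b"})
    }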
<stickupkid> manadart, you see this https://discourse.jujucharms.com/t/juju-2-7-2-release-notes/2667/2?u=simonrichardson
<stickupkid> that should have been, "have you seen this" - need some wake up juice
<manadart> stickupkid: I gave him a work-around in another post. I will follow up.
<manadart> https://discourse.jujucharms.com/t/juju-upgrade-controller-fails-space-not-found/2559/13
<nammn_de> manadart: time later for a cr on this one? https://github.com/juju/juju/pull/11183
<nammn_de> Just as a heads up, I talked to rick_h yesterday about move-to-space, so it will come up during daily again as we weren't 100% sure about some possible outcomes
<manadart> nammn_de: I will look when I can.
<nammn_de> manadart: sure, take your time. Thanks!
<stickupkid> achilleasa, on your travels in spaces etc, did you see any place where we generated random names/ids
 * stickupkid wonders if names repo does this 
<stickupkid> I'm loath to use https://github.com/juju/utils/blob/master/randomstring.go
<achilleasa> stickupkid: random? nothing rings a bell
<stickupkid> manadart, random string of 8 chars or go full bore on a uuid
<manadart> stickupkid: I say 8 chars. 2 UUIDs is a bit long. BTW, your patch was hanging at "starting" for me. I am working off this change set ATM: https://pastebin.ubuntu.com/p/fKMYwt3mgH/
<stickupkid> manadart, k, will fix
<stickupkid> manadart, applied the patch, but these are annoying changes
<stickupkid> -	for k, subnetIDs := range subnetIDsForZone {
<stickupkid> +	for i, subnetIDs := range subnetIDsForZone {
<stickupkid> :|
<manadart> stickupkid: Change it back. I don't care.
<stickupkid> manadart, haha
<manadart> stickupkid: I realise it's a map not a slice, so I needn't have changed it.
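For context, the rename really was a no-op: ranging over a map yields keys, so k was already the right name. A tiny standalone illustration, with a hypothetical shape for subnetIDsForZone:

    package main

    import "fmt"

    func main() {
        // Hypothetical shape of subnetIDsForZone from the patch above: a
        // map keyed by zone, so the range variable is a key, not an index.
        subnetIDsForZone := map[string][]string{
            "az1": {"subnet-a"},
            "az2": {"subnet-b", "subnet-c"},
        }
        for k, subnetIDs := range subnetIDsForZone { // k is a zone name; renaming it to i gains nothing
            fmt.Println(k, subnetIDs)
        }
    }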
<stickupkid> manadart, i've fixed the tests
<manadart> stickupkid: BTW, it looks like we have to omit both external and FAN networks in our considerations.
<stickupkid> manadart, I've landed your patch with the test fixes https://github.com/juju/juju/pull/11188
<manadart> stickupkid: OK. I will update here and work off your changes. Still debugging.
<manadart> stickupkid: Boom. https://pastebin.ubuntu.com/p/ct6SkQWf6c/
<stickupkid> WOW
<stickupkid> manadart, ho?
<stickupkid> manadart, https://github.com/go-goose/goose/pull/78
<manadart> jam: Do you have permission to merge there? ^
<rick_h> morning party folks
<rick_h> stickupkid:  what are we giving random names to?
<stickupkid> rick_h, port name for openstack, needs to be namespaced by a model uuid and a network id, but also some random bit, as we don't have the instance id yet
<rick_h> stickupkid:  just uuid them? do we care?
<rick_h> guess that gets to be crazy with double uuids already there lol
<stickupkid> yeah, bingo
<rick_h> hah
<stickupkid> rick_h, gets it
<stickupkid> :)
<stickupkid> it's like crazy uuid town
<rick_h> I can haz uuids! ALL the uuids!
<stickupkid> hahaha
<stickupkid> yeah
<rick_h> stickupkid:  so there's > 1 per machine/unit. so unit tag + int?
 * rick_h runs the boy to school, back in a couple
<stickupkid> rick_h, i'll see if that info is readily available, otherwise I've got a nice compromise atm
<rick_h> stickupkid:  awesome ok
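A minimal sketch of the naming compromise being discussed: namespace the port by model UUID and network ID, plus a short random suffix since the instance ID is not known yet. The helper name, prefix, and 8-hex-char length are assumptions, not juju's actual implementation.

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
    )

    // portName builds an OpenStack port name scoped to a model and network,
    // with a random 8-hex-char suffix standing in for the unknown instance id.
    func portName(modelUUID, networkID string) (string, error) {
        buf := make([]byte, 4) // 4 random bytes -> 8 hex chars
        if _, err := rand.Read(buf); err != nil {
            return "", err
        }
        return fmt.Sprintf("juju-%s-%s-%s", modelUUID, networkID, hex.EncodeToString(buf)), nil
    }

    func main() {
        name, err := portName("f47ac10b-58cc-4372-a567-0e02b2c3d479", "net-1234")
        if err != nil {
            panic(err)
        }
        fmt.Println(name)
    }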
<stickupkid> manadart, fan type filtering has landed
<manadart> stickupkid: OK.
<hml> achilleasa: proved that calling dispatch for actions on k8s units is all that's needed.  making that work outside of a spike will be more interesting.  :-)
<stickupkid> manadart, food for thought? https://github.com/juju/juju/pull/11188#discussion_r381348454
<stickupkid> manadart, updated
<achilleasa> Can I get a CR+QA on https://github.com/juju/juju/pull/11233?
<achilleasa> rick_h: do you think I should backport ^^^ for 2.7.3?
<hml> achilleasa:  looking
<rick_h> achilleasa:  looking
<rick_h> achilleasa:  hmmm...I'm nervous about a change in behavior fallout but guess we're wanting it anyway
<rick_h> achilleasa:  so I've talked myself into +1 to backport
<achilleasa> rick_h: I am not sure if you are ever supposed to use the command that particular way so perhaps no one will notice ;-)
<rick_h> achilleasa:  yea, that's what I got to as well
<manadart> stickupkid: Will look in bit. OK to merge if you are not around?
<stickupkid> too late, kicked off the merge
<stickupkid> :p
<hml> achilleasa:  approved, but have a question about leadership in relation to this fix, in the pr
<achilleasa> hml: if you still have your test controller up, add another mysql unit and juju run the cmd on that
<hml> achilleasa:  already done.
<hml> achilleasa:  it was more a question of how to tell in the code change.  the description says it, but how do i know in the code?
<achilleasa> ah... well, all application setting reads are guarded via token checks; the check for reading happens here: https://github.com/juju/juju/blob/develop/apiserver/facades/agent/uniter/uniter.go#L1580-L1600
<hml> achilleasa:  ah, okay.  ty
<achilleasa> writes are guarded via this decorator: https://github.com/juju/juju/blob/develop/state/leadership.go#L50
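In outline, the guard pattern achilleasa is pointing at looks roughly like the sketch below: a write runs only after a leadership token check passes. The types here are simplified stand-ins, not juju's actual leadership API.

    package main

    import (
        "errors"
        "fmt"
    )

    // Token reports whether the caller currently holds leadership.
    type Token interface {
        Check() error
    }

    type leaderToken struct{ isLeader bool }

    func (t leaderToken) Check() error {
        if !t.isLeader {
            return errors.New("not the application leader")
        }
        return nil
    }

    // guardedWrite runs op only if the token confirms leadership, mirroring
    // the decorator idea: non-leader units get an error instead of a write.
    func guardedWrite(token Token, op func() error) error {
        if err := token.Check(); err != nil {
            return fmt.Errorf("prerequisites failed: %w", err)
        }
        return op()
    }

    func main() {
        err := guardedWrite(leaderToken{isLeader: false}, func() error {
            fmt.Println("writing application settings")
            return nil
        })
        fmt.Println(err) // prerequisites failed: not the application leader
    }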
<achilleasa> hml: can you also sanity check https://github.com/juju/juju/pull/11234 (backport to 2.7). I am also running the QA steps again with this branch on lxd
<hml> achilleasa:  sure
<hml> achilleasa:  looks the same to me - approved
<babbageclunk> thumper: I'm changing github.com/juju/utils/tar - should I extract it to juju/tar?
<anastasiamac> babbageclunk: m not thumper but i think it's worth it (and follows what we r trying to do with other juju/utils)... so insignificant, but +1
<babbageclunk> anastasiamac: that counts! I'll need a wallyworld- or thumper-alike to make the repo though
<anastasiamac> babbageclunk: :)
<babbageclunk> So I'll do it in-place for now
<rick_h> babbageclunk:  https://github.com/juju/tar
<babbageclunk> rick_h: oh awesome, thanks wallyworld- and thumper-alike! I'd have pinged you but I didn't think you were still around
<thumper> babbageclunk: yep, do it
 * babbageclunk does it
 * thumper needs to think about food...
<thumper> maybe
<babbageclunk> dammit now you've got me thinking about food
<thumper> babbageclunk: I've decided to fast through lunch...
<thumper> perhaps I'll eat later
<thumper> I have many calls this afternoon, and I'm behind on email
<thumper> I ate last night, I'm fine
<anastasiamac> wow stoic
<hpidcock> thumper: I've been doing 18-hour fasts every day... I know them feels
<thumper> hpidcock: working for you?
<hpidcock> thumper: yep, lost some weight, but more importantly, more energy for the day.
<thumper> hpidcock: the stability of the energy is a big part
 * thumper out for physio, bbl
#juju 2020-02-20
<tlm> what do we consider the primary web domain for juju? Need to inject a proper unique domain into kubernetes for the controller
<timClicks> tlm: juju.is will be our home shortly
<tlm> sold, thanks
<hpidcock> tlm: inside k8s we use juju.io
<hpidcock> but I think that is wrong
<tlm> oh
<timClicks> we never actually acquired that domain I think
<hpidcock> yep
<tlm> i am looking for refs in the code
<tlm> we use it only in annotations now
<tlm> should we change to something we own ?
<anastasiamac> babbageclunk: https://github.com/juju/juju-restore/pull/6 has the one char read :)
<babbageclunk> ok looking
<babbageclunk> anastasiamac: neat! but the patches make me sad
<babbageclunk> thumper: (or rick_h if you haven't gone to sleep yet) can you make one for juju/symlink as well? Otherwise juju/tar still depends on juju/utils/symlink
<anastasiamac> babbageclunk: yeah :( i did not like it either but i was not too keen to add yet another func as a property everywhere
<anastasiamac> babbageclunk: anyway changed now.. PTAL ?
<tlm> hey hpidcock hit a snag with ssl certs. Got time for HO ?
<hpidcock> sure
<babbageclunk> anastasiamac: approved!
<anastasiamac> babbageclunk: \o/
<anastasiamac> babbageclunk: what's wrong with 'in a raw mode'?
<babbageclunk> anastasiamac: I know what you mean about passing in more things - it kind of means they end up leaky
<anastasiamac> yeah main command call is growing :(
<babbageclunk> anastasiamac: Well, there's only one raw mode, and the terminal's in it!
<anastasiamac> i can honestly say that English articles r confusing!! :D
<babbageclunk> yeah, that's probably fair
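A minimal sketch of the single-character read in raw mode that the juju-restore PR is about, assuming golang.org/x/term; the real prompt code may differ.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/term"
    )

    func main() {
        fd := int(os.Stdin.Fd())

        // Put the terminal in raw mode so a single keypress is delivered
        // immediately, without waiting for Enter.
        oldState, err := term.MakeRaw(fd)
        if err != nil {
            panic(err)
        }
        defer term.Restore(fd, oldState)

        buf := make([]byte, 1)
        if _, err := os.Stdin.Read(buf); err != nil {
            panic(err)
        }
        // Raw mode disables output post-processing, hence the explicit \r\n.
        fmt.Printf("read %q\r\n", buf[0])
    }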
<rick_h> babbageclunk:  https://github.com/juju/symlink
<babbageclunk> yay thanks again rick_h - I think thumper's ignoring me.
<thumper> babbageclunk: not ignoring, just busy
<babbageclunk> thumper: I know, just hassling!
<thumper> babbageclunk: the fileutils part was one of the next things I was wanting to extract
<thumper> particularly the exec handling
<thumper> but it was always just a bit "later"
<thumper> wallyworld: seeing multiple intermittent test failures in ActionSuite.TestFindActionTagsByLegacyId trying to land in develop
<wallyworld> ok
<thumper> also in worker/caasoperator
<thumper> jam: updated https://github.com/juju/juju/pull/11208 to use mark and sweep to indicate initialization step
<thumper> to avoid calculating summaries while initializing
<hpidcock> thumper: wallyworld: my pr fixes that intermittent failure
<hpidcock> which has landed
<tlm> hpidcock wallyworld: still sorting out stuff but start of PR https://github.com/juju/juju/pull/11089
 * tlm will be back at 5
<achilleasa> hpidcock: so is this block copied over from the uniter and not in use atm? (https://github.com/juju/juju/blob/develop/apiserver/facades/agent/caasoperator/operator.go#L187-L211). Are there any plans to use it in the future? I guess I might as well add a leadership check there too to be safe... just need to figure out what AuthOwner() returns when inside the operator facade
<hpidcock> achilleasa: when I looked a few hours ago there is no call to that from the caasoperator worker. It sets up an interface, casts the client to that interface, stores it and that was all. Only the uniter runner context has any mention of calling setpodspec. It looks like it's been there for two years and nothing used it. Or uses it anymore. So without going through the history, I can't say for sure. But what I can say is
<hpidcock> that an application should only have one caasoperator worker, so the leadership check is unnecessary.
<achilleasa> hpidcock: that was my thought exactly which is why I am currently bypassing the check there (and in the migration path)
<stickupkid> manadart, quick ho around openstack
<stickupkid> ?
<manadart> stickupkid: OMW
<achilleasa> jam: when we introduce new hook types do we need to also open PRs against charm tools (e.g. to populate the symlinks when building the charm)?
<nammn_de> manadart: later time for a ho about move-to-space and fan networking?
<manadart> nammn_de: Sure. Let's do it after stand-up. I have to relocate home before then.
<nammn_de> manadart: rgr
<stickupkid> achilleasa, i'm seriously thinking of adding multiple errors to pkg/err
<stickupkid> achilleasa, i'm sure we roll our own every time
<achilleasa> stickupkid: params.ErrorResult is one place
<achilleasa> and I think we do something similar for bundlechanges
<stickupkid> achilleasa, just done a new one in provider/openstack \o/
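A minimal sketch of the multi-error type that keeps getting hand-rolled, as floated above; this is an illustration, not a proposal for the actual juju/errors API.

    package main

    import (
        "fmt"
        "strings"
    )

    // multiError accumulates errors and flattens them into one message.
    type multiError struct{ errs []error }

    func (m *multiError) Add(err error) {
        if err != nil {
            m.errs = append(m.errs, err)
        }
    }

    // ErrorOrNil returns nil when nothing was collected, so callers can
    // treat the aggregate like an ordinary error value.
    func (m *multiError) ErrorOrNil() error {
        if len(m.errs) == 0 {
            return nil
        }
        return m
    }

    func (m *multiError) Error() string {
        msgs := make([]string, len(m.errs))
        for i, err := range m.errs {
            msgs[i] = err.Error()
        }
        return strings.Join(msgs, "; ")
    }

    func main() {
        var merr multiError
        merr.Add(fmt.Errorf("port creation failed"))
        merr.Add(nil) // nils are ignored
        merr.Add(fmt.Errorf("security group missing"))
        fmt.Println(merr.ErrorOrNil())
    }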
<achilleasa> can I get a quick CR on this tiny change? https://github.com/juju/charm/pull/304
<zeestrat> Hey, we're hitting something strange where deploying cs:ubuntu-14 in a bundle fails, but deploying it manually with `juju deploy ubuntu --series xenial` works. Could someone humor me and deploy http://paste.ubuntu.com/p/XFqtd6qc7q/ with Juju 2.7.2 on local LXD before I write up a bug?
<zeestrat> https://www.irccloud.com/pastebin/J43EZZHw/
<stickupkid> zeestrat, this might be related https://github.com/juju/charm-helpers/pull/434#pullrequestreview-361992024
<zeestrat> stickupkid: thanks, looks that way. Still rather confused why it hits in bundle deploys but not CLI deploy. Thanks for the info!
<achilleasa> stickupkid: can you take a look? https://github.com/juju/juju/pull/11237
<achilleasa> stickupkid: added extra context for the uniter change
<stickupkid> achilleasa, happy with that :+1:
<thumper> aah... who changed history on the 2.7 branch?
<thumper> also, how did the history get rewritten on the 2.7 branch
<hml> thumper: when?
<thumper> since Feb 1
<thumper> which was the last time I pulled 2.7
<thumper> https://github.com/juju/juju/commits/2.7?after=9da5b593fa1d096220230143f1e1fea5c4eb56ad+69
<thumper> suspect there
<hml> thumper: rick_h, we needed a backport from achilleasa  for 2.7.2 - there should be 3 commits that got placed after his
<thumper> Jan 23 looks right
<hml> thumper: i'm not sure of the date though.
<thumper> I'm not sure what has happened here
 * thumper sighs
<thumper> looks like the 2.7 branch was renamed to 2.7-configoption
<hml> thumper: it was around feb 10
<thumper> and an older branch was pushed to 2.7
<thumper> no idea what/why
<thumper> not even sure if the 2.7-configoption is in 2.7
<thumper> because one of my fixes doesn't appear to be in 2.7 any more
<thumper> this needs more digging
<hml> thumper: i don't think that was us?  <crossing fingers>
<thumper> FFS
<thumper> and... those commits aren't in our latest release
<thumper> so we are missing about four bug fixes in the 2.7 branch
<babbageclunk> oh crap
<timClicks_> wallyworld: which release notes template did you use for your 2.7.3 WIP post?
<wallyworld> i copied 2.7.2 and hacked it up
<wallyworld> there is a template somewhere i think
<timClicks_> which would have been copied from 2.7.1
<timClicks_> they're using an old template
<timClicks_> simon's release of 2.7.0 is using the current one
<hpidcock> so turns out AWS instance pricing index.json is somewhat wrong. It shows prices for instance types for regions that don't fully support that type. It just means at least one AZ in that region supports that instance type...
#juju 2020-02-21
<thumper> looking for another review of https://github.com/juju/juju/pull/11208
<hpidcock> thumper: I had a quick glance over it, but might be better for wallyworld to review it more thoroughly
<wallyworld> looking
<wallyworld> thumper: to what extent were you still intending to look at john's unresolved comments? the only other issue is the one hpidcock raised. were you looking for a +1 or waiting for john?
<wallyworld> hpidcock: here's that show-operation one https://github.com/juju/juju/pull/11238
<thumper> wallyworld: john's main unresolved issue was the initialization case, where we were calculating the summary hash for every model change during initialization
<thumper> this was the addition of the initializing variable and hooking into mark and sweep
<thumper> I was looking for a +1
<thumper> I'll update the controller's mark and sweep to lock
<wallyworld> thumper: there's also the passing of the address of the initialising variable
<wallyworld> that could open it up to a race condition
<thumper> not really
<thumper> or at least the race detector doesn't think so
<thumper> there are memory barriers around the access and updates
<wallyworld> ok
<wallyworld> thumper: you have your +1
 * thumper goes to make updates for hpidcock
<hpidcock> wallyworld: I added comments on your PR
<wallyworld> ta
<wallyworld> kelvinliu_: your pr fixes bug 1854104 right?
<mup> Bug #1854104: Duplicate file mounts in CaaS charm causes silent failure <k8s> <juju:In Progress by kelvin.liu> <https://launchpad.net/bugs/1854104>
<kelvinliu_> wallyworld: yes
<wallyworld> ta, i'll add link to pr
<kelvinliu_> we were editing the description at the same time, but u committed quicker. lol
<hpidcock> wallyworld: how about this for a kicker, aws returns a `Your requested instance type (c4.large) is not supported in your requested Availability Zone (ap-northeast-1a)` even though the instance type is supported according to their APIs. And to top it off, they also use that error when there isn't any stock available.
<hpidcock> so nothing we can do about that atm
<hpidcock> I've fixed the cases where the AZ straight up doesn't support the instance type
<wallyworld> hpidcock: well how about that, jeez
<wallyworld> kelvinliu_: got time for HO?
<kelvinliu_> yep
<hpidcock> wallyworld: also apne1-az3 (which is our ap-northeast-1a) looks to be either a small AZ or they are decommissioning it. So this could be the source of the problems
<wallyworld> kelvinliu_: google hangouts sound not working, will try reboot
<kelvinliu_> nws
<wallyworld> kelvinliu_: ah wait, i  think it works again
<wallyworld> kelvinliu_: am in HO
<kelvinliu_> me 2
<kelvinliu_> ru in standup?
<hpidcock> wallyworld: I believe I've found the bug. When we retry a StartInstance call, we don't retry on failed AZs. But, there was logic to check if there are no AZs left to try on, and to clear the failed flag for that machine. But we only checked to see if there were ANY AZs that we could retry on. So if the constraints filtered out at least one AZ, we would never clear the failed flags.
<hpidcock> So when filtering to one AZ, it would only ever try once to start the instance.
<wallyworld> hpidcock: interesting, i could have sworn we had a special error type to deal with az failures
<anastasiamac> hpidcock: also for the instance types that are no longer supported by cloud providers, they can be marked as deprecated.. i believe that will eliminate them from the running...
<hpidcock> anastasiamac: problem is this specific AZ, AWS APIs report them as being supported. And they are not deprecated yet (still considered current generation according to AWS)
<anastasiamac> ah
<wallyworld> hpidcock: ty for review also, i cargo culted that "interesting" timer code from actions. i'll refactor all the instances
<wallyworld> hpidcock: so is the plan to rework the retry logic on AZ failure?
<hpidcock> no it's a tiny change, the retry logic just needed to handle constraints
<hpidcock> it handles constraints when selecting an az, not when retrying the AZs the constraints match
<hpidcock> I'll throw a PR up soon
<wallyworld> gr8 ok
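In outline, the fix hpidcock describes amounts to something like the sketch below: the "no AZs left to retry" check has to be made against the constraint-matching AZs, not all AZs, or the failed flags never get cleared once constraints filter the list down to one. Names and types are simplified for illustration, not juju's provider code.

    package main

    import "fmt"

    // selectZone picks the first constraint-matching AZ that has not failed.
    // When every matching AZ has failed, it clears the failed flags so the
    // next attempt can retry from scratch.
    func selectZone(matching []string, failed map[string]bool) string {
        for _, az := range matching {
            if !failed[az] {
                return az
            }
        }
        // The buggy version only cleared the flags when *all* AZs (matching
        // or not) had failed, which never happened once constraints filtered
        // at least one AZ out, so a one-AZ filter got a single attempt ever.
        for az := range failed {
            delete(failed, az)
        }
        return matching[0]
    }

    func main() {
        failed := map[string]bool{"ap-northeast-1a": true}
        // Constraints filter down to a single AZ; it should still be retried.
        fmt.Println(selectZone([]string{"ap-northeast-1a"}, failed))
    }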
<hpidcock> can probably wait till monday https://github.com/juju/juju/pull/11239 happy to change anything
<wallyworld> hpidcock: looking
<wallyworld> hpidcock: should github.com/hpidcock/aws-fetch-instance-types live alongside process_cost_data.go in ec2/internal/instancetypes?
<hpidcock> wallyworld: I didn't want to introduce aws-sdk
<hpidcock> but I'm happy to
<wallyworld> ah i see
<wallyworld> we have the k8s api, azure sdk etc
<wallyworld> one more may not hurt
<wallyworld> lol "17,114 additions, 3,649 deletions not shown because the diff is too large. Please use a local Git client to view these changes."
<hpidcock> just a few changes
<wallyworld> hpidcock: btw, the reason the wait duration was a string was to allow the user to leave off the units and have it default to seconds
<wallyworld> i don't like it but it's a compatibility change :-( could stick it behind the feature flag i guess
<wallyworld> all code that's 5 years old
<hpidcock> --wait 1
<hpidcock> 1 what lol
<wallyworld> yeah, i know, i know
<wallyworld> it will be a bit messy but i can do show-operation the right way and hack up the legacy show-action-output to use the feature flag
<wallyworld> since show-task and show-action-output are the same command
<wallyworld> but with a different name for jujuv3 feature flag vs legacy
<wallyworld> and show-operation was copied across from show-action-output
<matt_kosut> icey: hi I was proposing a small improvement on ansible apply_playbook in charm-helpers about how arguments are passed. Wondering if it seems worth discussing https://github.com/juju/charm-helpers/pull/425
<icey> matt_kosut: I've never even looked at the ansible bits of charmhelpers :-P
<matt_kosut> icey: github kinda suggested you as reviewer, should I look for someone else in there? :D
<icey> matt_kosut: one of the existing contributors, maybe? I can review the code but I don't have any context on the change
<matt_kosut> tinwood: back to you again :-) you have been the last one reviewing ansible part of charm-helpers past few years, maybe you could have more context to have a look on my PR? https://github.com/juju/charm-helpers/pull/425
<zeestrat> ^ and a couple more Ansible PRs: https://github.com/juju/charm-helpers/pull/403, https://github.com/juju/charm-helpers/pull/404
<tinwood> matt_kosut, zeestrat, I'm not really an ansible person ... i.e. I don't know whether these things are 'correct' or not. However, I'm happy to merge if you can review each other's patches.
<manadart> stickupkid, achilleasa: Need a review of https://github.com/juju/juju/pull/11241
<matt_kosut> tinwood: sounds reasonable too, I used it quite a lot the past few weeks :)
<zeestrat> @tinwood I understand. Just for the record, matt_kosut and I are colleagues so we are slightly biased, but I have looked over his and it looks good. I suggested adding a docstring to clarify, should be good to go after that.
<stickupkid> manadart, looking
<stickupkid> manadart, done
<manadart> stickupkid: Ta.
<stickupkid> manadart, the test code for goose doesn't obviously fill in the device id :|
<stickupkid> manadart, makes it difficult to test
<manadart> ⊙_⊙
<zeestrat> tinwood: Things are reviewed and ready to go when you are.
<tinwood> zeestrat, matt_kosut please could you bring those branches up to date with the current head of master; I can do it, but then you'd need to pull the changes if you wanted to make any more mods to the branches (e.g. if the merge broke a test).
<zeestrat> tinwood: should be up to date now.
<matt_kosut> tinwood: same on my side, good to go. :-)
<tinwood> zeestrat, matt_kosut just working my way through them; they take a while as each time one is merged, the branch needs to be brought up to date and then the tests have to run ...  I'll get back to you! :)
<zeestrat> tinwood: ty! I fixed conflict in https://github.com/juju/charm-helpers/pull/403.
<tinwood> zeestrat, kk - I'll keep an eye on them. I've just merged https://github.com/juju/charm-helpers/pull/425 so it may happen again?
<zeestrat> tinwood: Looks good now.
<tinwood> zeestrat, matt_kosut that should be all three merged.  Thanks for your work on them!
<zeestrat> tinwood: nice. Thank you very much and have a nice weekend :)
<tinwood> you too!
<matt_kosut> thanks!
<hml> achilleasa:  approved 11237
<achilleasa> hml: tyvm
<stickupkid> manadart, https://github.com/go-goose/goose/pull/79
<manadart> stickupkid: Approved it.
<stickupkid> ta much
<stickupkid> manadart https://github.com/juju/juju/pull/11235
<manadart> stickupkid: Swap you. https://github.com/juju/juju/pull/11243
<skay> how do I configure a file via the command line?
<skay> I mean, a config setting
<skay> my yaml file uses include-file
<skay> a ha! got help from a friend. juju config app value=@/path/to/file
<hml> region pr review: https://github.com/juju/juju/pull/11242
<stickupkid> manadart, done
<manadart> stickupkid: Ta.
<hml> stickupkid:  ty
<achilleasa> hml: or stickupkid any recommendations for playing with windows workloads?
<hml> achilleasa:  never done it.  :-D
<achilleasa> hml: hmmm... maybe windows vm and manual?
<achilleasa> I want to try the reboot check
<hml> achilleasa: there's a noop charm in the store?
<hml> achilleasa: https://jaas.ai/u/ionutbalutoiu/noop/7
<hml> but might not have enough for your purposes
<achilleasa> all I need is a unit agent
<hml> that should get ya one
<achilleasa> awesome thanks
