#juju 2012-02-13
<_mup_> juju/trunk r454 committed by kapil.thangavelu@canonical.com
<_mup_> merge local-respect-series, local provider now respects environment series [r=bcsaller,jimbaker][f=914392]
<jcastro> hazmat: huh weird, I should have double checked that proposal before sending it to you
<jcastro> I'll redo it
<bac> When running against a local environment, 'juju ssh' requests the ubuntu user's password as the authorized_keys haven't been copied over.  This doesn't occur when running against ec2.  Any ideas?
<bac> hazmat, SpamapS: i'm having trouble with juju on lxc not setting up the ubuntu user properly.  it looks like the ubuntu users already exists when 'setup_users' is called, so it returns without populating the .ssh directory.  is this a known problem?
<m_3_> SpamapS: what time is the daily juju package built?  looking at the recipes, but it's just "daily"... but they don't seem to be built at midnight UTC or midnight Boston
<SpamapS> m_3_: you'd have to ask the launchpad guys .. I believe it is guaranteed to build at least once on any day that has a delta though.
<m_3_> (trying to sync up testing updates)
<SpamapS> m_3_: you probably just have to poll the ppa to see when the binaries are updated
<m_3_> well I guess it's ok for the tests to be 24hrs behind
<m_3_> oh
<m_3_> hmmmm... ok
<m_3_> thanks
<bac> SpamapS: hey clint did you see my question above?
<m_3_> bac: haven't seen that problem, but the lxc charm tests aren't run on the latest juju
<m_3_> bac:  all was working fine wrt juju ssh/lxc as of version... (looking)
<bac> m_3_: interesting.  btw, i've tried today with the PPA version and the precise distro version.
<m_3_> bzr451
<bac> m_3_: someone appears to be unexpectedly setting up the ubuntu user before setup_users is called.  perhaps it needs to check for the existence of ~ubuntu/.ssh before deciding to bail.
<bac> m_3_: thanks
<m_3_> bac: I'll try to reproduce locally in a bit... usually lxc problems turn out to be network-related
<m_3_> pls verify your `virsh net-list --all` shows default active, and that `ip addr show` shows virbr0 using 192.168.122.1, dnsmasq has 192.168.122.0/24, lxc guest can resolve names, etc
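m_3_'s checklist as a runnable sequence, for reference (the addresses are the libvirt defaults under discussion):

```
virsh net-list --all        # the 'default' network should be active
ip addr show virbr0         # expect inet 192.168.122.1/24
ps aux | grep '[d]nsmasq'   # dnsmasq should serve 192.168.122.0/24
# and from inside an lxc guest, name resolution should work:
#   ping -c1 archive.ubuntu.com
```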
<hazmat> bac, why would the ubuntu user exist in the container beforehand?
<hazmat> its just a debootstrap
<SpamapS> m_3_: are you running your tests on precise btw? we should set that up so we can detect when the OS breaks juju
<SpamapS> m_3_: may be harder than it sounds though.. as spawning a different AMI in a juju cluster is, I think, fairly difficult right now
<m_3_> SpamapS: that prompted the line of questioning above
<m_3_> SpamapS: sort of set up the charm expecting to run a set of testing services _per_ series
<m_3_> unfortunately that means an environment per series right now
<m_3_> but they'll be rolling up to '<series> charms' tabs in jenkins.qa.ubuntu.com
<m_3_> so it shouldn't be a problem
<m_3_> bac: oneiric checks out fine... precise should work tomorrow morning
<m_3_> ^^ (wrt juju lxc)
<SpamapS> m_3_: what I really want is for a precise slave to join to the oneiric master testing box.
<m_3_> SpamapS: gotcha
<SpamapS> m_3_: to do that one would have to reach into zk and tweak the series or image id
<SpamapS> might even have to restart the provisioning agent.. I haven't looked close enough
<m_3_> SpamapS: opinion... is it better to have oneiric-{lxc,ec2}-charm-bitlbee or just have all providers contribute to an overall oneiric-charm-bitlbee test status?
<SpamapS> we really need a juju setenv or something
<SpamapS> m_3_: I like having more separate tests, because it becomes more scalable
<m_3_> SpamapS: since we're rolling up to a common external dashboard anyway... not sure of the point of spending too much time trying to get oneiric and precise talking
<m_3_> the build publisher plugin can push results up to the mother ship (jenkins.qa.ubuntu.com) from separate envs
<m_3_> each env might have master and slaves... but all for a single series
<m_3_> kicking off tests doesn't have to be centralized.. especially if we're gonna base it on lp changeset info
 * m_3_ really wishes we had instance-type per service
<bac> m_3_: i just returned and am reading your previous comments.  thanks for following up.
<bac> hazmat: i don't know why the ubuntu user already exists before getting to setup_users.  just reporting what i discovered.
<m_3_> bac: np
#juju 2012-02-14
<michael_tn> good day all :-)
<_mup_> juju/purge-queued-hooks r459 committed by jim.baker@canonical.com
<_mup_> Handle merging events in case of purge
<jimbaker> still need to clean that up, but the logic seems sound. dinner!
<_mup_> juju/purge-queued-hooks r460 committed by jim.baker@canonical.com
<_mup_> Cleanup
<_mup_> juju/purge-queued-hooks r461 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<niemeyer> gary_poster: ping
<gary_poster> niemeyer, on call, will pong in 10 or so
<niemeyer> gary_poster: No worries.. it was mostly to warn you about this:
<niemeyer> ----- lp:~yellow/charms/oneiric/buildbot-slave/trunk
<niemeyer> error: charm publishing previously failed: symlink "hooks/helpers.py" links out of charm: "../../buildbot-master/hooks/helpers.py"
<niemeyer> gary_poster: This works out of a bug only
<gary_poster> niemeyer, yeah, we are aware, thanks.  We will fix before we declare it "done"
<niemeyer> gary_poster: Super
<jcastro> SpamapS: FYI the webinar is 45 minutes, and I'll heavily mention that people should watch the first one before they do this one, so we can just jump right into the meat.
<frankban> hi everybody, I am currently not able to bootstrap juju using the ec2 environment. I've found http://pastebin.ubuntu.com/841728/ on the zookeeper instance. It seems the problem may be us-east-1.ec2.archive.ubuntu.com returning a 403 error. Can you confirm my suspicion?
<m_3> frankban: ec2 / us-east-1 is bootstrapping fine on this morning's oneiric packages (bzr454)... perhaps the problem was transient?
<frankban> m_3: problem solved, thank you
<m_3> np
<hallyn> just curious, has anyone worked on a charm for diaspora pods?
<hallyn> also, curious whether anyone is looking at bug 930430
<_mup_> Bug #930430: lxc-ls requires root access after deploying an LXC instance <juju (Ubuntu):Confirmed> <lxc (Ubuntu):Confirmed> < https://launchpad.net/bugs/930430 >
<m_3> hallyn: diaspora's on the list Bug 803538
<_mup_> Bug #803538: Charm Needed: Diaspora <hot> <Juju Charms Collection:New> < https://launchpad.net/bugs/803538 >
<m_3> it's a rocking one to get working though... great story for how individuals can put juju to good use... juju's not just for big stacks... etc etc
<_mup_> Bug #932269 was filed: Juju should allow "service-destroyed" hooks <juju:New> < https://launchpad.net/bugs/932269 >
<hallyn> m_3: awesome
<hallyn> i don't trust other people's pods :)
<m_3> hallyn: :)
<bac> hi m_3, i am still having the LXC+SSH problems i mentioned yesterday but today am using a PPA built on r454.  any ideas on how to diagnose this problem?
<m_3> bac: hey... hmmm
<m_3> so let's back up a sec... what series are you running on your host machine?
<bac> precise
<m_3> ok, have you disabled lxcbr in /etc/default/lxc?
<bac> nope.  first i've heard of that
<m_3> ok, just want to make sure we don't have any conflicts... want only one dnsmasq running, bound to 192.168.122.1, on interface virbr0
<bac> m_3, the problem i'm seeing is that the .ssh directory for the ubuntu user is not being created
<m_3> bac: right... I'm backing up further... never seen that problem and suspect it's something more basic
<bac> m_3, i *can* ssh to the unit, but my authorized_keys are not there, so it prompts me for a password
<m_3> what address is it getting?
<bac> m_3, see the end of my master-customize.log file at http://pastebin.ubuntu.com/842188/
<bac> virbr0 is 192.168.122.1
<m_3> the lxc instance is picking up a 122 address?
<bac> yeah, 122.113
<m_3> so the next thing to check out would be your environments.yaml file
<m_3> make sure you've got a `juju-origin: ppa` line as part of your local environment (whatever you named it)
<bac> m_3, so i should set USE_LXC_BRIDGE="false" ?
<m_3> bac: I would, yes
<bac> yes, environments.yaml is good.  this set up was working last week.
<m_3> then later when you restart lxc (might have to reboot) that interface and corresponding dnsmasq will go away
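The /etc/default/lxc change being suggested, as a sketch (so only libvirt's dnsmasq on virbr0 remains):

```
# /etc/default/lxc
USE_LXC_BRIDGE="false"   # don't bring up lxcbr0 or its dnsmasq
# restart lxc (or reboot, as noted) for the bridge to go away
```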
<m_3> is the juju-origin in there?
<m_3> (don't remember when that particular problem arose)
<bac> yes, juju-origin: ppa
<m_3> cool
<m_3> ok, when was the last time you flushed your /var/cache/lxc?
<m_3> BTW, does it have a 'precise' entry in there or is it still oneiric?
<bac> yesterday, in response to this problem cropping up
<bac> oneiric
<m_3> bac: ok, so this is smelling a bit like a problem we'd been haunted by in lxc before
<bac> m_3, just curious, did you look at the paste?
<m_3> lxc instances sometimes couldn't mount filesystems (/proc is what I'm looking at here)
<m_3> yeah, basing this guess on the paste
<bac> ok
<m_3> two possible things to try
<m_3> 1.) wipe the cache again and see what happens
<m_3> 2.) write your key into root's authorized_keys in the container
<m_3> you should be able to write directly to /var/lib/lxc/.../rootfs/root/.ssh/authorized_keys
<bac> oh, great.  didn't know that
<m_3> I _think_ the running instance should pick that up
<m_3> once you're in we can debug further
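A minimal sketch of the key-injection workaround (the container directory name is an example -- use whatever `ls /var/lib/lxc` actually shows, i.e. the "..." in m_3's path):

```
ROOTFS=/var/lib/lxc/bac-local-0-template/rootfs   # example name
sudo mkdir -p "$ROOTFS/root/.ssh"
cat ~/.ssh/id_rsa.pub | sudo tee -a "$ROOTFS/root/.ssh/authorized_keys"
```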
<bac> fwiw i've rebooted and the other dnsmasq is gone
<m_3> cool
<m_3> oh, if you've destroyed your environment, you might wanna flush the cache
<bac> m_3, do you find it odd that ~ubuntu exists?
<bac> will do
<bac> i mean *before* setup_users is called
<m_3> don't know... it's copying a cached image right?
<m_3> it might be set up in the image (I totally haven't dug into when/how this gets created)
<bac> dunno
<m_3> so a similar problem surfaced every few weeks with the instances not being able to mount devpts and so we couldn't get into the machine
<bac> m_3: should i change default-series to precise?  does that work now?
<m_3> it's on my list to test today... haven't yet
<m_3> I'd leave it at oneiric atm
<bac> ok, i've changed to precise for the container, blew away /var/cache/lxc, and am trying.  this should take a while
<bac> oops
<m_3> ah, precise is fine
<m_3> just multiple moving parts
<bac> m_3, btw, i used the juju recipe to build a precise package into my ppa.  it had failed two days ago due to a networking glitch but worked today.  perhaps you should request a build into the ~juju ppa now.
<m_3> bac: ha
<m_3> that's what I was just going to ask you... how'd you add the ppa on precise
<m_3> that's going to be a problem if we use default-series set to precise with juju-origin set to ppa
<m_3> perhaps it'd be best to use oneiric/ppa to test
<bac> m_3, ok i will.  unless you think it is hopeless i'd like to let this one continue coming up
<m_3> sure... might give us some more info
<m_3> but what'll happen is your local juju-454 will spin up precise lxc instances that get juju-447 (which is hard-coded to only use oneiric)
<m_3> so we should expect it to at least be confused :)
<_mup_> juju/purge-queued-hooks r462 committed by jim.baker@canonical.com
<_mup_> Docstrings/comments/PEP8/PyFlakes
<bac> m_3, just a heads up that i'm still here and waiting on the lxc instance still
<bac> m_3 and now it is up.  the problem is the same.
<m_3> bac: ok, so manually inject your key and poke around
<bac> m_3, well i am logged in via username/password.  is that sufficient?
<bac> m_3, you mentioned the account possibly existing due to a cached copy.  where would that have come from given i blew away /var/cache/lxc?
<m_3> bac: who are you logged in as?
<bac> ubuntu
<m_3> how'd you set a password?
<bac> it is preset as ubuntu
<m_3> ah, sorry
<m_3> does the output of `mount` look normal?
<bac> i'll paste it into a pm
<m_3> looking for proc, devpts, etc
<m_3> bac: catching up with you on a test box
<m_3> bac: so can you deploy a service?
<bac> yes
<bac> but i cannot debug it
<jamesmitchell> while I am working on a charm, can I do something to push the update out to a running juju environment? At the moment I am doing a 'destroy-environment', then 'bootstrap' and 'deploy' again.
<bac> jamesmitchell: did you try upgrade-charm?
<m_3> so on oneiric, I can't 'juju ssh 0'... I get Permission denied (publickey)
<m_3> but once a service is up, I can 'juju ssh bitlbee/0' no problem
<m_3> bac: so you're trying to get into a deployed service once it's up and running right?
<bac> m_3: yes
<bac> m_3: but this is what kills me:
<bac> + setup_users
<bac> + id ubuntu
<bac> uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),999(admin),105(libvirtd)
<bac> + '[' 0 == 0 ']'
<bac> + return 0
<bac> we don't know where that user got created
<jamesmitchell> thanks. upgrade-charm is going to bump the revision as well?
<bac> jamesmitchell: yes
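A sketch of the cycle bac is suggesting (service and repository names are placeholders):

```
# hack on the charm under ~/charms/oneiric/mycharm, then:
juju upgrade-charm --repository ~/charms mycharm
# the charm revision is bumped and running units pick up the new hooks,
# with no destroy-environment / bootstrap / deploy round trip
```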
<m_3> bac: yeah, I'm getting the opposite...
<m_3> E: Sub-process /usr/bin/dpkg returned an error code (1)
<m_3> + setup_users
<m_3> + id ubuntu
<m_3> id: ubuntu: No such user
<m_3> + '[' 1 == 0 ']'
<m_3> + adduser ubuntu --disabled-password --shell /bin/bash --gecos ''
<m_3> bac: ok, so the base image or lxc templating
 * m_3 digging thru /var/{lib,cache}/lxc
<m_3> bac: what's `grep ubuntu /var/cache/lxc/precise/rootfs-amd64/etc/passwd`
<m_3> bac: and what does /var/lib/lxc/bac-local-0-template/config show?
<bac> m_3, odd -- i specified precise in environments.yaml but it created an oneiric cache.
<m_3> grrrr
<m_3> 454?
<bac> ii  juju                   0.5+bzr454-1juju2~prec next generation service orchestration system
<bac> m_3, no ubuntu in cached /etc/passwd
<m_3> there were several places in the code where args were defaulted to oneiric... perhaps one was missed
<m_3> bac: dunno man... I'm thinking the short story is... still broken
<bac> boo
<m_3> I triggered a ppa build
<bac> oh, good
<m_3> and I bugged clint to update the archive... although that might wait until we've sorted this out
<m_3> oh, BTW, while it's up... `dpkg -l | grep juju` on the instance
<m_3> 447?
<m_3> or 453
<bac> ii  juju                       0.5+bzr454-1juju2~oneiric1 next generation service orchestration system
<m_3> wow!
<m_3> oh, nevermind
<m_3> it's oneiric
<m_3> the precise ppa build just failed again
<m_3> I'll test off of head and see where I get... I think I can specify something like 'juju-origin: lp:juju'
<m_3> this is in the way of testing charms on precise so we'll need it worked out sooner rather than later
<m_3> bac: thanks... sorry we didn't get anywhere.  I'd recommend oneiric running oneiric lxc if you just need to get stuff done atm
<bac> m_3: thanks for your help
<bac> sadly i don't have an oneiric machine around
<m_3> lxc works inside of an openstack or ec2 instance just fine
<bac> oh, right
<m_3> works inside of a libvirt VM too... just watch network conflicts
<m_3> I use ec2/large so I can put /var/lib/lxc on a decent-sized tmpfs
<m_3> ok, precise ppa build is good now
 * m_3 going to wait in line at the florist
#juju 2012-02-15
<_mup_> juju/purge-queued-hooks r463 committed by jim.baker@canonical.com
<_mup_> Locking test for UnitRelationLifecycle.purge
<_mup_> juju/deploy-invalid-conf r449 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<jcastro> niemeyer: it isn't clear to me in the CS spec, but is there a flag for pulling unofficial charms from the store?
<niemeyer> jcastro: No flags, it's just a different url
<niemeyer> jcastro: cs:~jcastro/oneiric/hadoop
<jcastro> ah perfect, thanks.
<niemeyer> jcastro: np
 * jcastro expects some cs:~upstream-project-that-isnt-in-the-repository/oneiric/projects
<jcastro> good to know we covered that use case
<bac> hi m_3
<bac> hazmat, m_3:  i finally found the root of the problem i've been trying to resolve with the ubuntu user not being setup properly in the container.  the lxc update from last friday (https://launchpad.net/ubuntu/+source/lxc/0.7.5-3ubuntu24) includes a change to the ubuntu template to create the ubuntu user.  this causes setup_users to skip populating /home/ubuntu/.ssh.  i have a patch that makes setup_users more robust:  http://pastebin.ubuntu.com/843134/
<m_3> bac: cool... thanks!
<hazmat> bac thanks for looking into it, the patch lgtm..  bcsaller, jimbaker ^
<hazmat> i'll apply it as a trivial
<bac> great, thanks hazmat
<bcsaller> hazmat: yeah, looks fine
<jimbaker>  cool
<_mup_> juju/trunk r455 committed by kapil.thangavelu@canonical.com
<_mup_> merge deploy-invalid-conf. services won't deploy without valid config. [r=jimbaker, bcsaller][f=914610]
<hazmat> bac the first bit looks a little strange, it's basically saying: if the user exists, then add the user
<bac> hazmat: not my intent.  yeah, that part is bogus
<bac> hazmat: the condition is backward
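The shape of the fix, as a minimal sketch (bac's actual patch is the pastebin above; where the key material is copied from is an assumption here):

```
setup_users() {
    # Newer lxc templates pre-create the ubuntu user, so its existence
    # alone is no reason to bail -- only skip the adduser step.
    if ! id ubuntu >/dev/null 2>&1; then
        adduser ubuntu --disabled-password --shell /bin/bash --gecos ''
    fi
    # Always make sure ~ubuntu/.ssh ends up populated.
    if [ ! -e /home/ubuntu/.ssh/authorized_keys ]; then
        mkdir -p /home/ubuntu/.ssh
        chmod 700 /home/ubuntu/.ssh
        cp /root/.ssh/authorized_keys /home/ubuntu/.ssh/  # assumed source path
        chown -R ubuntu:ubuntu /home/ubuntu/.ssh
    fi
}
```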
<_mup_> juju/trunk r456 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] lxc creation script copes with existing ubuntu user [a=bac][r=hazmat,bcsaller,jimbaker]
<gary_poster> hazmat, fwiw, gmb just sent the feedback email I mentioned earlier to the juju list.
<gary_poster> Also, m_3 or SpamapS or other ~charmers, we just submitted our charms for review https://bugs.launchpad.net/charms/+bug/932974 and eagerly await feedback.  Thanks!
<_mup_> Bug #932974: Buildbot charms <Juju Charms Collection:Fix Committed> < https://launchpad.net/bugs/932974 >
<gary_poster> thanks mup, like I said. :-)
<_mup_> juju/unit-stop r424 committed by kapil.thangavelu@canonical.com
<_mup_> minor tweaks
<jimbaker> hazmat, are you working on bug 872264 ?
<_mup_> Bug #872264: stop hook does not fire when units removed from service <juju:In Progress by jimbaker> < https://launchpad.net/bugs/872264 >
<jimbaker> ok, i see this is a doc branch in lp:~hazmat/juju/unit-stop
<hazmat> jimbaker, i'd had a discussion with niemeyer about it when i was writing that spec, it's going to need some state modifications
<hazmat> gary_poster, noted, thanks.
<hazmat> gmb, relation-get can also retrieve local (non-remote unit) values
<jimbaker> hazmat, it sounds like it may be too early then to start dev on
<niemeyer> hazmat: Maybe even implement the ideas we discussed at Budapest
<hazmat> jimbaker, have you thought about a solution ?
<jimbaker> like to see some consensus before i tackle it
<niemeyer> But we need a proper spec first
<hazmat> agreed
<jimbaker> hazmat, no, just trying to work on the most important bugs first. i will unassign myself
<hazmat> niemeyer, the solution i got to was recording intent in the topology, which felt a little odd, and creates a garbage collection activity
<niemeyer> hazmat: Ugh
<hazmat> niemeyer, we record user intent in the topology, the coordination around stop happens, and then.. the topology needs cleanup
<niemeyer> hazmat: Well.. that sounds ok..
<hazmat> niemeyer, if we modify the topology to directly enact the user action, we effectively remove the identity.. if we record elsewhere we're violating the encapsulation of user intent in the topology.
<niemeyer> hazmat: We just need to think it through, since the logic has to be aware of the fact there is now ghost information in the topology
<hazmat> niemeyer, yeah.. it seemed like the best option
<hazmat> even if we record elsewhere the topology needs cleanup
<niemeyer> hazmat: Right.. I recall that conversation now.. tear down follows the inverse process of setup to preserve the grand concept
<niemeyer> hazmat: Maybe.. but I wouldn't classify that as garbage collection
<hazmat> ghost recon ;-)
<gary_poster> hazmat, relation-get: cool, we would love to know how.  We don't see it in the docs even after searches, but I would not be at all surprised to have you show it to us in there. :-)  response to the email would be best: those questions/comments are from all of us collectively in the squad
<hazmat> gary_poster, relation-get -h ?
<hazmat> fair enough
<niemeyer> hazmat: The key distinction is that it's not unimportant information.. it's there for a reason, and the way it is disposed of must be well defined and considered
<gary_poster> hazmat, -h gives me an idea on how to do it, at least.  (`relation-get key $JUJU_UNIT_NAME` afaict?)
<gary_poster> non-obvious imo :-)
<hazmat> gary_poster, fair enough, it needs  documentation
<hazmat> and it looks like it should be a switch so a key doesn't need to be specified
<gary_poster> yeah +1
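The local read gary_poster worked out from -h, for reference inside a relation hook ('mykey' is a placeholder setting):

```
remote_value=$(relation-get mykey)                  # the usual remote read
local_value=$(relation-get mykey $JUJU_UNIT_NAME)   # this unit's own value
```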
<SpamapS> Hey guys, any code going to land in the next 8 hours that I should hold off uploading to precise for feature freeze?
 * SpamapS re-posts in #juju-dev as well
<m_3> SpamapS: we really should test the precise lxc stuff before uploading
<SpamapS> m_3: did something change?
<m_3> SpamapS: yup... this morning... just kicked off another ppa build to test it out
<m_3> 456 should go in if it tests out
 * m_3 looking to see what 455 is as that'll get pushed too
<m_3> yeah, 454 and 456 are critical to precise
<SpamapS> m_3: bug #'s?
<m_3> both 454 and 456 relate to Bug #914392
<_mup_> Bug #914392: LXC local provider does not respect 'series' (only installs oneiric) <local> <juju:Fix Released by hazmat> <juju (Ubuntu):Triaged by clint-fewbar> < https://launchpad.net/bugs/914392 >
<hazmat> SpamapS, should be good afaics
<m_3> ok, new ppa is out... please test precise running precise lxc containers
<m_3> bac: ^^
<m_3> I'll do the same
<bac> cool, thanks m_3.  i had trouble getting mine to build due to test failures.
<m_3> for peeps who haven't done that yet, you've gotta edit your environment to change 'default-series: precise' and make sure 'juju-origin: ppa'
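An environments.yaml excerpt matching those instructions (a sketch: the environment name and data-dir are placeholders, and other required keys are omitted):

```
environments:
  local:
    type: local
    data-dir: /tmp/local-juju   # scratch space for the local provider
    juju-origin: ppa            # containers install juju from the PPA
    default-series: precise     # series the lxc containers will run
```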
<bac> m_3: have you been able to install 0.5+bzr456-1juju2~precise1 ?
<dendrobates> I haven't tried in precise yet
<m_3> bac: it's still downloading stuff
<bac> m_3: it appears on the lp page but i don't know if it is published yet
<m_3> bac: crap, yeah... it's still showing 454... we'll have to wait on lp
<m_3> bummer... ok, the build failed on precise
<danwee> can anybody help with juju?
<m_3> hazmat: the failing test doesn't look to have anything to do with the code changes
<m_3> https://launchpadlibrarian.net/92996257/buildlog_ubuntu-precise-i386.juju_0.5%2Bbzr456-1juju2~precise1_FAILEDTOBUILD.txt.gz
 * hazmat pokes
<hazmat> m_3, nothing obvious
<hazmat> danwee, sure
<danwee> at last, thanks hazmat
<danwee> ok, i'm trying to connect juju to a zookeeper, but i keep getting an invalid ssh key
<danwee> that is on orchestra server
<danwee> i did RSA exchange keys with no luck to solve the problem, when i juju status
<danwee> any ideas?
<danwee> https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<danwee> reached juju status step, >> invalid ssh key
<danwee> are u still there ?
<hazmat> danwee, so you did bootstrap, the machine comes up, but status can't connect because of an invalid ssh key
<hazmat> danwee, do you have a key specified in environments.yaml or is it doing the default and picking it up from the host/user that the cli is running on
<danwee> mmm no i didn't specify a key in the yaml, you think that will solve the problem ?
<danwee> you mean i can specify an RSA key in the yaml ?
<danwee> hazmat can you explain more please
<hazmat> danwee, yes.. you can specify the key, else the key is sniffed from the environment.. if its sniffed from the environment, its important that the same user executes juju subsequently as its their key thats being setup on the newly launched machine
<hazmat> danwee, the bottom of this has some details on specifying keys explicitly instead of implicitly https://juju.ubuntu.com/docs/provider-configuration-ec2.html
<hazmat> hopefully that helps
<danwee> you are great :)
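From the linked docs, the explicit-key settings look roughly like this in environments.yaml (paths and key material are examples):

```
# point at a public key file:
authorized-keys-path: ~/.ssh/id_rsa.pub
# or paste the key inline instead:
# authorized-keys: ssh-rsa AAAA... user@host
```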
<jianghaitao> hi, just joined. Can anyone help me with running LXC unit tests?
<jianghaitao> it is complaining about resolvconf package
<m_3> jianghaitao: hi... first make sure you're set up like https://juju.ubuntu.com/CharmSchool
<m_3> jianghaitao: with one addition... your environments.yaml should have 'juju-origin: ppa'
 * SpamapS is starting to think more and more that juju should push itself into the bootstrap node and deploy itself from the bootstrap node rather than trying to direct users to the right place to fetch it
<jianghaitao> well, I downloaded the juju source and would like to run the LXC unit tests
<jianghaitao> but it did not work, just wondering why?
<koolhead17|afk> jianghaitao: http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage/65360#65360
<_mup_> Bug #65360: [UNMETDEPS] libbonobouimm1.3 has unmet dependencies <libbonobouimm1.3 (Ubuntu):Fix Released by geser> < https://launchpad.net/bugs/65360 >
<koolhead17|afk> did you check this link
<SpamapS> actually jianghaitao brings up an important point.. I don't believe we run those unit tests in an automated fashion *ever*
<SpamapS> and further.. I don't know why
<jianghaitao> Thanks, but that is not what I was asking
<jianghaitao> I knew how to run juju locally
<SpamapS> well except that buildds run lucid, which has a very old lxc in the kernel.
<SpamapS> jianghaitao: we don't run those tests very often.. we probably should.
<jianghaitao> just that I want to run the unit tests that come with the source code...seems I did not state that clearly from the beginning?
<m_3> jianghaitao: the lxc tests require a lot of lxc local environment setup
<SpamapS> jianghaitao: those specific tests are usually skipped because they require extra functionality on your system
<m_3> jianghaitao: lxc setup assumes things like apt-cacher-ng bound to 192.168.122.1:3124(?), etc
<jianghaitao> That is ok, I just want to know how to set it up so I can run it
<m_3> jianghaitao: so I recommend that before you try any lxc unit tests, you get your lxc env set up
<m_3> jianghaitao: cool... so that's the link I sent above juju.ubuntu.com/CharmSchool
<m_3> spells out the packages you'll need and the basic environment you'll need
<jianghaitao> how do I know my lxc is set up? I think I set it up already
<jianghaitao> Do I need to install juju before running unit tests? probably not, right...I will double check again
<m_3> jianghaitao: it's set up when a service (choose a charm) reaches a 'started' state
<m_3> jianghaitao: yes, you'll need to install everything on that list to get lxc working
<jianghaitao> humm....actually I had more problems running juju unit tests on a VM where I had set up juju and was able to start services...but anyhow, I appreciate all your help and will work on it more
<m_3> jianghaitao: cool
<danwee> i specified the key in environments.yaml, but still invalid ssh key >> hazmat, any ideas ?
<danwee> something is missing, but i think this the right path
<hazmat> danwee, hmm.. there's a few variables to play out, the machine appears to be up per orchestra and that the ssh connection attempt is happening at all. debugging further would require getting into the machine to understand why the key hasn't been put in place correctly. we put the key in place via cloud-init
<hazmat> m_3, SpamapS is there any good way to get debugging info out of orchestra, like what the KSMETA is for a machine
 * m_3 clueless
<danwee> hazmat, from the machine side, all there is is the authorized_keys; should i install cloud-init on the machine in order for juju to authenticate it somehow ?
<m_3> danwee: yes, that should be part of the orchestra preseed iirc
<SpamapS> hazmat: cobbler has a commandline tool
<SpamapS> hazmat: you can use dumpvars to get all of the variables for a system
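SpamapS's pointer as concrete commands, run on the orchestra/cobbler server (the system name is danwee's node from earlier):

```
sudo cobbler system list
sudo cobbler system dumpvars --name=testata | grep -i ksmeta
```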
<hazmat> danwee, it should be part of the base cloud image
<hazmat> cloud init that is
<hazmat> jianghaitao, could you verify that the libvirt bridge is running? it should be virbr0 in ifconfig output
<danwee> yes, i'm viewing the kickstart for the machine
<danwee> thanks a lot for your help, i'm gonna delve more into the subject, but thanks for pointing me in the right direction
<danwee> you take care, and if i find out something interesting, i'll share it as well
<hazmat> danwee, if you have the kickstart metadata for the machine could you post it to pastebin.ubuntu.com and send the link here.
<hazmat> so cloud-init isn't putting the keys on the machine, either the base image is off somehow (ie. cloud-init not installed), or cloud-init's data is misconfigured
<jamesmitchell> can I get a pointer for attaching and mounting an EBS volume for a service? I am deploying an nfs charm and want to put the data on a separate ebs volume
<m_3> jamesmitchell: the default juju images are ebs-rooted
<m_3> jamesmitchell: unfortunately, that's all we do now to handle data
<m_3> jamesmitchell: at least you can snapshot that, but otherwise it's manual attachment/mounting
<jamesmitchell> I'm fine with it being ebs-rooted. I want to add an extra ebs volume as part of the install
<jamesmitchell> and I'm not sure what I can do inside the instance to attach and mount
<m_3> jamesmitchell: you can ssh in and do anything you'd like
<jamesmitchell> :)
<m_3> jamesmitchell: juju ssh takes args and supports forwarding iirc... and 'juju scp' exists
<jamesmitchell> I was thinking more automated juju install
<m_3> right ;)
<jamesmitchell> everything I google wants me to attach the volume using console or ec2-tools
<jamesmitchell> so I ask whether there is a way for the instance to attach the volume itself - provided I send a volume id as a config value
<m_3> hmmmm... dunno, I've always done it using ec2 tools from _outside_ the instance
<m_3> excellent question though... I don't think there's anything preventing you from sending creds up to the instance and using ec2 tools from there
<jamesmitchell> fair enough - I'll go that route
<m_3> it's stupid, I know, but one way to do exactly what you're trying is two different nfs services
<m_3> just make the first one do nightly transfers from its shared-out directory to the backup mount
<m_3> they're both running ebs volumes
<m_3> but really you don't gain anything over just running a single ebs-rooted nfs server and taking snapshots of that ebs volume
<m_3> jamesmitchell: one thing I want to do is provide a 'backup to s3' charm
<m_3> that'll be subordinate to something like nfs server
<jamesmitchell> actually my use case is the setup of a new site. I have a 2GB tar file of initial image data (paired with an initial db load)
<jamesmitchell> the tar file is on S3, and I can currently copy it down and untar
<m_3> ah, that's it then
<jamesmitchell> but it seems really slow - so I thought about creating the ebs ahead of time, and using snapshots to preload other sites
<m_3> you'll have to add some config to the charm that takes s3 creds and an 'initial seed data' url
<jamesmitchell> yep - got that working now
<m_3> nice
<m_3> yeah, so it's worth playing with calling ec2 tools from the instance to see if you can attach a volume from within the instance
<m_3> that should be pretty easy to test out manually
<m_3> then add it to the charm config if that works out
<m_3> (please let me know!)
<jamesmitchell> ok - I will. Thanks for that
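A sketch of the "attach from within the instance" idea with ec2-api-tools, assuming credentials have been shipped up via charm config (volume id, device, and mountpoint are placeholders):

```
# EC2_ACCESS_KEY / EC2_SECRET_KEY exported from charm config beforehand
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
ec2-attach-volume vol-12345678 -i "$INSTANCE_ID" -d /dev/sdf
while [ ! -e /dev/sdf ]; do sleep 2; done  # wait for the device node
# first use only -- skip mkfs when attaching a preloaded snapshot:
mkfs.ext4 /dev/sdf
mkdir -p /mnt/data && mount /dev/sdf /mnt/data
```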
<m_3> jamesmitchell: please consider pushing your changes (pulling seed data from s3) back to the community charms
<jamesmitchell> funny you should say that... I have not been able to get the charm-tools to work on my Lucid system. Keep getting an m2 error
<m_3> SpamapS: lucid charm-tools?
<jamesmitchell> during checkout
<jamesmitchell> yes
<SpamapS> m_3: if they exist.. they're not well tested ;)
<m_3> jamesmitchell: that's not surprising
<m_3> unfortunately
<SpamapS> jamesmitchell: I think you mean mr right?
<m_3> jamesmitchell: just do `bzr branch lp:charms/nfs`
<SpamapS> jamesmitchell: mr changed drastically some time after lucid
<jamesmitchell> yes, sorry I meant mr
<SpamapS> jamesmitchell: We should probably have some kind of version detection so that doesn't break
<jamesmitchell> m_3: that is how I seeded my versions - direct to source :)
<m_3> mr's just a multi-repository checkout thingy... not much different than bzr branch
<SpamapS> jamesmitchell: part of the reason we haven't worried about that is that charm-tools is mostly a stop-gap until juju's backend store is setup.
<m_3> or `git bzr clone` :)
<SpamapS> m_3: shush, gitboi
 * m_3 making traction into the _second_ pot of coffee for the day
<SpamapS> m_3: once I drain #1, I always just run across the street and get a red bull ;)
 * SpamapS would be very sad if he didn't have a 7-11 45 steps away
<m_3> zul would be jealous
<zul> you know when im at home i dont drink redbull, you guys just drive me to insanity :P
<SpamapS> zul: s/drive me to insanity/get me to act like my true self/
<zul> heh im quite quiet without it
<m_3> gary_poster: ping
<_mup_> juju/env-from-env r457 committed by kapil.thangavelu@canonical.com
<_mup_> pickup juju env from environment
<_mup_> Bug #933165 was filed: Juju environment can be specified via environment variable. <juju:In Progress by hazmat> < https://launchpad.net/bugs/933165 >
<m_3> hazmat: whoohoo thanks! perfect precedence ordering too
<m_3> I'm totally gonna get the yellow team to propose cli plugins
 * m_3 sees how it is... :)
<SpamapS> ;)
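Assuming the variable this branch introduces is JUJU_ENV (the branch name env-from-env implies an environment variable; the exact name is an assumption here), usage would look like:

```
export JUJU_ENV=local   # choose the environment without repeating -e
juju status
juju -e ec2 status      # an explicit -e still takes precedence
```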
<m_3> gary_poster: please ping me tomorrow when you get a chance, I've got some questions on the buildbot charm
<gary_poster> m_3, will do, thanks
#juju 2012-02-16
<_mup_> Bug #933214 was filed: juju cli api should timeout connecting to unix socket <juju:New> < https://launchpad.net/bugs/933214 >
<_mup_> juju/cli-api-unit-option r457 committed by kapil.thangavelu@canonical.com
<_mup_> unit specified via cli switch instead of positional
<hazmat> SpamapS, do you know the config for a sans networking container for lxc-create
<hazmat> perhaps just 'empty'
<m_3> SpamapS: 457 builds in the ppa... testing lxc now, but go ahead and upload that one
<hazmat> no.. that just makes a more isolated container
<hazmat> sans useful networking
<m_3> hazmat: it might be like libvirt where you use a real bridged interface br0, not virbr0 or lxcbr0
<hazmat> m_3, no.. the goal is to have it not clone the network namespace
<hazmat> so it lives in the parent network namespace
<m_3> right, understand... but that's what you do when you want a libvirt vm to share the parent's network
<m_3> totally different containment here in lxc
<hazmat> yeah.. it looks like  cloning the network namespace is hardcoded in lxc start.c
<hazmat> m_3, its more like a chroot
<hazmat>  share the parent network  is accurate as well
<m_3> right... wonder if other dev nodes would give any hints (like pts or something)
<hazmat> ah.. maybe the '' value instead of 'empty' does it
<m_3> ha
<hazmat> m_3, lxc source is my guide
<m_3> yup
<hazmat> hmm
<hazmat> upstart
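For reference, the container config value being probed (behavior per the lxc of that era, matching what hazmat found in start.c -- truly sharing the parent's network namespace wasn't supported):

```
# gives the container only a loopback device; it does NOT leave the
# container in the host's network namespace
lxc.network.type = empty
```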
<m_3> yay, precise lxc seems to be working now
<danwee> hello everybody
<danwee> hazmat can i ask you for help?
<danwee> can anyone help with juju ?
<koolhead11> danwee: please shoot your question am not an expert but can see
<danwee> ok, i'm using orchestra server; when i try to deploy a machine for juju, using juju status, i get this error msg:
<danwee> 2012-02-16 13:50:27,260 INFO Connecting to environment. 2012-02-16 13:50:27,766 ERROR Connection refused Unhandled error in Deferred: Unhandled Error Traceback (most recent call last): Failure: txzookeeper.client.ConnectionTimeoutException: could not connect before timeout Cannot connect to machine MTMyOTAzODIwNy4xNDU0ODUwNjQuMjkzNzg (perhaps still initializing): could not connect before timeout after 2 retries 2012-02-16 13:50:57,316 E
<koolhead11> danwee: and are you using juju PPA repository?
<danwee> i installed juju on the orchestra server, oneiric 11.10 > sudo apt-get install juju, as suggested on this page: https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<koolhead11> danwee: sorry am not the correct person to help you on that :(
<danwee> yesterday i had an invalid ssh key, then i added the rsa key to the environments.yaml as hazmat suggested, but i got this other msg, the unhandled error thing
<danwee> but thanks for listening koolhead11
<koolhead11> danwee: can you paste juju -v status
<danwee> orchestra@orchestra:~/.juju$ juju -v status 2012-02-16 14:02:50,216 DEBUG Initializing juju status runtime 2012-02-16 14:02:50,240 INFO Connecting to environment. 2012-02-16 14:02:50,240 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="testata" remote_port="2181" local_port="34213". 2012-02-16 14:02:50,745:22380(0x7ffdaaec9720):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.3 2012-02-1
<danwee>     result = result.throwExceptionIntoGenerator(g)   File "/usr/lib/python2.7/dist-packages/twisted/python/failure.py", line 350, in throwExceptionIntoGenerator     return g.throw(self.type, self.value, self.tb)   File "/usr/lib/python2.7/dist-packages/juju/providers/common/connect.py", line 33, in run     client = yield self._connect_to_machine(chosen, share)   File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 542
<koolhead11> danwee: please use paste.ubuntu.com
<danwee> i'm not familiar with that
<danwee> a sec
<danwee> ok i did
<danwee> http://paste.ubuntu.com/844269/
<koolhead11> danwee: i am also stuck at a similar step in my local openstack infra for running juju
<koolhead11> :(
<danwee> yeah, sucks, doesn't it
<koolhead11> danwee: my infra is behind a proxy, even my CC is using the proxy, so i am stuck i suppose
<danwee> are you trying to connect your instances to EC2 ?
<koolhead11> no, i have my own openstack infra and am trying to use juju there
<danwee> mmm what server did you use to deploy openstack, i'm curious
<koolhead11> ubuntu oneiric
<danwee> orchestra server ?
<koolhead11> nope, manually, a 2-node deployment
<danwee> mmm so you followed the openstack documentation, that's the hard way. at least you managed to deploy openstack
<danwee> with orchestra it's upside down: you have to deploy cobbler first, then juju, and last openstack
<koolhead11> danwee: yeah i will try it with precise in a few days!! :)
<danwee> :) good luck with that, keep up informed if things work up with you
<danwee> us*
 * koolhead11 needs a direct/minus proxy setup first :P
<gary_poster> m_3, hey.  I'm free for another 45 minutes if you can talk about the buildbot charms from now till then.  I will be free again for a bit after 1600Z or so, so we don't have to talk this second if that is inconvenient for you
<m_3> gary_poster: morning... lemme just grab some coffee
<m_3> oh, dang... just saw that was almost 45mins ago
<m_3> oops
<gary_poster> cool, m_3. I have a call in 9 but it probably won't last more than 20 minutes
<m_3> gary_poster: cool
<gary_poster> I'll ping you when I'm off
<m_3> thanks
<jcastro> SpamapS: m_3: https://trystack.org/
<SpamapS> jcastro: definitely need to test juju on that. :)
<m_3> jcastro: you get an acct?
<jcastro> just found out about it
 * jcastro thinks we'll need a "how to try juju on trystack" page.
<m_3> might not do the ec2 interface though...
<m_3> only see the openstack native so far
<jcastro> look how delicious it looks, using bootstrap
<jcastro> and we're chillin' in 1998 with a moin wiki. :-/
<m_3> SpamapS: yay... your 'juju commit' post got a bump
<jcastro> heh
<gary_poster> m_3, calendar-reading-fail.  My call is in another hour. :-) So, can talk any time now.
<gary_poster> calendar memory fail, to be more accurate
<m_3> gary_poster: hey... ok, so biggest question is surrounding use of buildbot
<gary_poster> cool
<m_3> please forgive my ignorance
<m_3> but in the charm, y'all're adding the script info as config params
<m_3> it seems then that you'll be either:
<m_3> spinning up new slaves per job and then destroying them
<m_3> or controlling everything remotely from the juju cli ('juju set script_xxx=blah')
<m_3> It seems (naive first glance) that it might be easy to control jobs if:
<m_3> each master has a pool of slaves up and running
<m_3> this master hands out jobs over relation channels
<m_3> I hope my confusion is clear :)... but maybe you can take a sec to describe or point me to more info as to how buildbot runs?
<gary_poster> sorry, had distraction at home.  yes, AIUI this is the pattern Kapil uses in one of his Jenkins charms.  That would work in theory, but we have a significant wrinkle:
<gary_poster> setting up a slave can take > 2 hours
<gary_poster> For one of our tasks
<m_3> are they dedicated to a single task or can they be re-used?
<gary_poster> Dedicated
<m_3> oh
<m_3> hmmm...
<m_3> ok, that's a different story then
<gary_poster> Because that prep is specific
<m_3> wow... >2 hrs?
<m_3> dang
<gary_poster> So we figured that we would deploy the slave charm with different service names
<gary_poster> We haven't tried this yet but that's the plan :-P
<gary_poster> Because we know juju supports that
<m_3> how're jobs assigned/organized?
<m_3> does the master node really do it?  or is it external?
<m_3> or rather.... "what is the role of the master node in buildbot?"
<gary_poster> We wanted to make it completely flexible, so that the slave would say what steps it wanted to run.  This seemed more like a juju thing to do: "Hi master, I'm a new slave, and I'm prepared to do these sorts of things"
<gary_poster> But that fught against buildbot too much
<gary_poster> faught
<gary_poster> fought
<gary_poster> ugh
<m_3> seems like they exchange very limited information
<gary_poster> yeah
<gary_poster> the master has a buildbot config, which defines the kinds of things it tests (or runs).  These are "builds"
<gary_poster> When a slave joins, it tells the master which builds it is interested in participating in
<gary_poster> Multiple slaves joining for the same build simply acts as a high availability sort of thing: each slave participating in a given build is supposed to be identical, according to buildbot
<m_3> but then it doesn't look like the master actually hands the slave anything to run...
<m_3> that information is communicated to the slave through the scripts_xxx config params?  or am I missing something?
<gary_poster> (You might also ask, btw, why are we using buildbot rather than jenkins; the answer is a combination of legacy and a directive to go forth and charm)
<m_3> ha!
<m_3> yeah, no problem with that... just trying to provide value in the review
<gary_poster> m_3, yeah, the master.cfg is the thing that defines stuff.  So there's an example to look at in the master.  Finding
<gary_poster> So take a look at examples/pyflakes.yaml in the master
<m_3> yeah, I saw that... trying to figure out how that information gets communicated to the actual slave nodes
 * m_3 looking at pyflakes
<gary_poster> The dance is this:
<gary_poster> - We spin up a master with a given master.cfg
<gary_poster> This defines what builds are available to run
<gary_poster> - We spin up a slave, with any setup it might need
<gary_poster> - We tell the slave via juju set what builds it will be interested in
<gary_poster> - We make a juju relation between them
<gary_poster> - the master charm receives the builds that the new slave wants to participate in
<gary_poster> - the master charm delivers the name and password they should use as a handshake
<gary_poster> - now the buildbot master is restarted, to tell buildbot that there is a new slave interested in running one or more builds
<m_3> and buildbot slave gets the job information from the buildbot master directly (using the name/pw sent)?
<gary_poster> the buildbot slave is started, having been informed of the master ip, and the name and password to use
<gary_poster> - they join
<gary_poster> - when the master is ready to make a build using the slave, it directs the slave step by step per the build that they are working on
<gary_poster> sorry that took so long, but I had to think it through myself
<m_3> np... it helps!
<benji> m_3: right, the master and slave have their own communication channel and we just use relations for coordination
<gary_poster> yes
<gary_poster> I wanted to have slaves say "these are my steps!"
<gary_poster> but with buildbot config being written in Python that was getting kinda crazy
<m_3> cool... ok, thanks for walking me through it
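For reference, the dance gary_poster just walked through maps onto juju commands roughly like this (a sketch: the charms are the yellow squad's buildbot-master/-slave from the review bug, but the config option names here are illustrative, not the charms' actual schema):

```
juju deploy --repository ~/charms local:oneiric/buildbot-master master
juju deploy --repository ~/charms local:oneiric/buildbot-slave slave-pyflakes
# "these are the builds I'm interested in" -- hypothetical option name:
juju set slave-pyflakes builders=pyflakes
juju add-relation master slave-pyflakes
# the relation hands the slave the master's address and a name/password;
# buildbot's own master<->slave protocol takes over from there
```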
<gary_poster> We should probably include something like that in the README
<m_3> I'll spin them up for the dynamic part of the review process
<m_3> gary_poster: :)... I was gonna cut/paste it from here and recommend that
<gary_poster> cool m_3.  The first example in the README is reasonable to try.  Run away from the one that refers to "lp"
<gary_poster> :-)
<m_3> understood ;)
<m_3> gary_poster: I've got a handful of stuff to do today and we're taking a long weekend for the wife's b-day.  expect the dynamic review to land early next week
<m_3> gary_poster: thanks!
<gary_poster> cool, understood.  Have a great weekend, and thank you
<m_3> bac: I got precise running precise lxc on juju457 from the ppa last night... should be good to go... thanks for debugging that
<bac> m_3, np.  glad we got it figured out
<bac> m_3, did you have test failures when building the ppa?  was there a fix for it or was it intermittent?
<m_3> bac: it was transient
<m_3> bac: looks like clint got 457 into the archive before feature freeze too
<SpamapS> FF is in ~ 3.75 hours btw
<SpamapS> oo I should probably upload charm-tools
<SpamapS> though, being in universe, it can wait some.
<_mup_> juju/deploy-upgrade r457 committed by kapil.thangavelu@canonical.com
<_mup_> charm publisher logs when using an already uploaded charm
<hazmat> gary_poster, btw thanks again for the feedback, i'm in progress on various fixes proposed on the list (deploy -u, env from environment, and upgrade -f)
<hazmat> SpamapS, feels like jitsu should use a different name than 'juju' for the wrapper
<hazmat> else it's pretty magically implicit
<hazmat> bummer about the no-network lxc full containers, i was hoping that would work
<gary_poster> hazmat, awesome, that sounds great!  thank you
<jcastro> SpamapS: did you try trystack yet?
<_mup_> juju/deploy-upgrade r458 committed by kapil.thangavelu@canonical.com
<_mup_> deploy accepts a -u/--upgrade flag
<_mup_> Bug #933695 was filed: Deploy now accepts an upgrade flag. <juju:In Progress by hazmat> < https://launchpad.net/bugs/933695 >
<SpamapS> jcastro: feature freeze week man.. ;)
<SpamapS> hazmat: I'm totally open to changing the juju-jitsu wrapper command to something else. I intentionally kept it the same so that it extends juju, rather than replaces it.
<m_3> SpamapS hazmat: CLI plugins...
#juju 2012-02-17
<danwee> hello everybody, need help as usual for juju
<danwee> i copy pasted the error i m getting if anyone be kind to check it out : http://paste.ubuntu.com/844334/
<danwee> anybody can help with juju ?
<koolhead11> danwee: try the mailing list. Not every developer is present all the time, because of timezone constraints
<danwee> mmm ok
<jamespage> anyone having problems using the local provider with juju from ppa on precise today?
<jamespage> can't seem to get more than one lxc instance started :-(
<koolhead17> jamespage: the juju path might have logs about the services you wanted juju to execute. the container log might help :)
<jamespage> hmm; well container.log does not contain anything useful
<jamespage> I can see from the unit logs that the first lxc instance is fine;
<jamespage> I then: juju add-unit
<jamespage> the second instance never manages to connect to zookeeper
<jamespage> the first instance drops its connections
<jamespage> and I can't access either the first or second instance :-(
<jamespage> I think the second lxc instance networking trounces the first - I get DUP pings back
<jamespage> hmm - both instances have the same mac address - that sounds bad...
<jamespage> looks like the template configuration gets a hwaddr which then propagates elsewhere
<jamespage> ah - I see - lxc-clone does not scrub the hwaddr
<jamespage> oooppps
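A workaround sketch for the clone MAC issue (the container name is a placeholder; dropping the hwaddr line so lxc assigns a fresh random MAC on start is an assumption based on lxc's default behavior):

```
sudo sed -i '/^lxc\.network\.hwaddr/d' /var/lib/lxc/mycontainer/config
```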
<danwee> please help me with juju
<danwee> help needed with juju
<jamespage> danwee, what help do you need?
<danwee> jamespage, when i try to connect juju to a machine i get this error http://paste.ubuntu.com/844334/
<danwee> juju installed on orchestra server
<jamespage> danwee, OK - so what command did you run previous to that one?
<jamespage> juju bootstrap maybe?
<danwee> juju bootstrap
<danwee> and it finished successfully
<danwee> any ideas
<hazmat> jamespage, oops indeed
<jamespage> hazmat, bug 934256 raised
<_mup_> Bug #934256: lxc-clone should replace/change hwaddr when cloning containers <amd64> <apport-bug> <precise> <lxc (Ubuntu):New> < https://launchpad.net/bugs/934256 >
<jamespage> danwee, OK - so that just means it initiated the process - not that it completed
<jamespage> can you see that the remote host being used as the bootstrap node is running?
<danwee> yes it's running, i net-booted it with cobbler and it was acquired by juju
<danwee> it can see the node, but you can see in the log that the server itself is refusing the connection to the client
<danwee> my guess is that juju is the client, and the server is refusing the connection juju is trying to make
<danwee> why, i don't know
<jamespage> danwee, so the juju client is trying to connect to "testata" on port 2181
<jamespage> thats the bootstrap node
<jamespage> can you see if that is up and running?
<danwee> jamespage, i think the juju client is trying to connect to the orchestra server on port 53402 but failing. i think that after the juju command, the server sshes to the machine and succeeds, but when juju tries to make a connection to this machine using the local port 53402 on the server in order to connect to the remote port, the server refuses the connection from juju
<jamespage> danwee, it would indicate that zookeeper is not running on the bootstrap node
<jamespage> rather than the node not running at all
<jamespage> weird
<jamespage> can you ssh to testata and take a look
<danwee> take a look at ?
<danwee> i got to go, you take care and thanks for help
<danwee> we ll be in touch
<jamespage> bye :-)
<_mup_> Bug #934350 was filed: Upgrade charm should work from any state <juju:New> < https://launchpad.net/bugs/934350 >
<danwee> hello, need help in juju
<SpamapS> danwee: what can we do for you?
<danwee> SpamapS, tell me what you think  http://paste.ubuntu.com/844334/
<danwee> SpamapS, any ideas ?
<SpamapS> danwee: sorry I'm on phone calls
<danwee> SpamapS, guess no help is coming
<hazmat> SpamapS, do you have manual tests in any of the charms extant?
<hazmat> SpamapS, nevermind i see some
#juju 2012-02-18
<negronjl> SpamapS: ping
<SpamapS> negronjl: pong!
<negronjl> SpamapS: I'm trying to promulgate cloudfoundry-server but, promulgate complains about not being able to find the LP branch from the bzr branch.
<negronjl> SpamapS: How do I do this manually ?
<negronjl> SpamapS: The error that I get is: ERROR:can't determine Launchpad branch from bzr branch
<SpamapS> negronjl: you need to push --remember lp:~charmers/charms/cloudfoundry-server/trunk
<SpamapS> negronjl: you probably had it somewhere else than /charms, so promulgate is confused.
<negronjl> SpamapS: I was thinking of pushing to lp:~charmers/charms/oneiric/cloudfoundry-server .... would that work?  This is so when I bzr branch, I don't end up with a trunk directory but a cloudfoundry-server one .... thoughts ?
<SpamapS> negronjl: oh actually you probably have to include oneiric
<SpamapS> negronjl: no, you have to call it trunk
<SpamapS> negronjl: you should be branching lp:charms/cloudfoundry-server though, not the underlying path
<negronjl> SpamapS: so, bzr push --remember lp:~charmers/charms/oneiric/cloudfoundry-server/trunk then correct ?
<negronjl> SpamapS: thanks for the help .... that worked
<SpamapS> negronjl: AND WOOT!!!
 * SpamapS rings the bell of promulgation
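The sequence that worked, for the record (cloudfoundry-server as the example; promulgate's exact invocation is sketched from the conversation):

```
cd ~/charms/oneiric/cloudfoundry-server
bzr push --remember lp:~charmers/charms/oneiric/cloudfoundry-server/trunk
promulgate .
# consumers then branch the short alias:
#   bzr branch lp:charms/cloudfoundry-server
```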
<danwee> hello , need help with juju
<danwee> hazmat, you are the one who can help me
#juju 2012-02-19
<danwee> hello, i need help with juju, anyone can take a look : http://paste.ubuntu.com/844334/
#juju 2014-02-10
<IVBaker> hi, I'm a new user of juju and I have been following this tutorial: http://blog.xtremeghost.com/2012/11/lets-shard-something.html?showComment=1392004063214#c2693168348529963830. The problem is, I can't do mongo --host <address>:27021
<IVBaker> it returns: couldn't connect to server ec2...:27021 at src/mongo/shell/mongo.js:147
<IVBaker> exception: connect failed
<_sEBAs_> i'm having an issue using juju-local on ubuntu trusty
<_sEBAs_> the problem is with mysql, it's giving me an error with the innodb buffer
<_sEBAs_> i had a look on the net, but nothing i tried works; i think i'm gonna go back to precise
<_sEBAs_> seems to be a kernel version thing
<IVBaker> hello, can someone help me? I'm running a mongodb sharded instance (following a tutorial) and I'm blocked on the last step: mongo -host address:27021
<JoshStrobl> hey marcoceppi: Thanks for your help earlier today / last night. Found and resolved all the issues, filed to have my charm merged upstream! https://bugs.launchpad.net/charms/+bug/1278369
<_mup_> Bug #1278369: Merge Metis Charm: lp:~truthfromlies/charms/precise/metis/trunk <new-charm> <Juju Charms Collection:New> <https://launchpad.net/bugs/1278369>
#juju 2014-02-11
<vila> hi there, I'm encountering issues during 'juju bootstrap' on canonistack. I noticed that two security groups are created at that time, one of them being empty (according to 'nova secgroup-list-rules juju-cs-0'), is that expected ?
<vila> specifically, juju bootstrap says: 'Attempting to connect to 10.55.32.3:22' and sits there until a 10m timeout expires
<vila> mgz: ping ^ I have a gut feeling it's related to ssh keys and chinstrap but I can't find the best way to debug that :-/
<vila> juju debug-log says the connection times out too so I'm blind
<vila> and FTR, juju bootstrap fails with: '2014-02-11 09:35:13 ERROR juju.provider.common bootstrap.go:140 bootstrap failed: waited for 10m0s without being able to connect: Received disconnect from UNKNOWN: 2: Too many authentication failures for ubuntu'
<vila> axw_: I think I may be bitten by https://bugs.launchpad.net/juju-core/+bug/1275657 in the above ^ can you help me verify that ?
<_mup_> Bug #1275657: r2286 breaks bootstrap with authorized-keys in env.yaml <bootstrap> <regression> <ssh> <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1275657>
<vila> I'm using 1.17.2 from a saucy client by the way
<vila> ha damn, wrong TZ to reach axw_ :-/
<vila> mgz: not around ?
<mgz> hey vila, am now, was just in standup
<vila> mgz: cool ;)
<mgz> what juju version are you using?
<mgz> and can you actually route to that 10. address?
<vila> mgz: 1.17.2 according to --show-log
<vila> mgz: depends on the routes, let me recheck, nova list always shows the instance for one
<vila> mgz: ha, just got the timeout, so from memory:
<mgz> bootstrap on trunk requires ssh, which requires working forwarding
<vila> I could reach it via chinstrap at one point
<vila> mgz: it was working this week-end with an already bootstrapped node (so the routing was working)
<mgz> you probably need to set authorized-keys-path to your canonical ssh key that chinstrap accepts
<vila> at that point that is, then I wanted to restart from a clean env, destroy-env
<vila> mgz: that would make sense according to the bug, but why did it work before ?
<mgz> oh, and when in doubt, delete the ~/.juju/environments/*.jenv files
<vila> yeah, delete that, nova delete, even swift delete at one point
<vila> trying again with my canonistack key in authorized-keys-path
<vila> mgz: and about that empty secgroup ?
<mgz> the per-machine ones are empty until a charm sets some ports and `juju expose` is run
<vila> mgz: ha great, makes sense, thanks
<vila> hard to guess though ;) But makes perfect sense
<vila> ok, bootstrap running, instance up, can't connect via ssh even using -i with my canonical key (too early maybe, re-trying)
<vila> mgz: using ssh -v, I'm getting the host key (so the ssh server is up, right?), all my keys are attempted, none work
<mgz> hmpf
<vila> mgz: and what's the config key to reduce that 10mins timeout ?
<mgz> you can just interrupt, no?
<vila> hmm, looks like it's ok now, I remembered going into a weird state where I had to cleanup everything (that's how I discovered the swift delete...)
<vila> mgz: slight doubt, I had to generate a pub version of my canonistack key with 'ssh-keygen -y', that's the right command ?
<mgz> why?
<mgz> that's not right.
<mgz> chinstrap knows nothing about any canonistack keys, you need one that chinstrap allows
<vila> mgz: hold on,
<vila> mgz: I need the pub one for authorized-keys-path: ~/.canonistack/vila.key.pub in env...s.yaml, and I need the private one in .ssh/config for Host 10.55.60.* 10.55.32.*
<vila> mgz: chinstrap itself is ok, I can ssh to it
<vila> mgz: probably from my launchpad key known by my ssh agent (yup confirmed)
<mgz> right, just use the launchpad key
<mgz> in juju config
 * vila blinks
<vila> mgz: not sure that trick will work for our other use cases, but let's see if it works for me first
<vila> mgz: nope, same behavior
<vila> mgz: any way to set 'ssh -vvv' for juju attempts ?
<mgz> if you poke the source probably
<vila> damn it, scratch those last attempts, the .jenv is not updated when I modify my envs.yaml !
<mgz> no
<mgz> delete it after failed bootstraps :)
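mgz's reset recipe, condensed (the environment name is a placeholder):

```
rm -f ~/.juju/environments/*.jenv   # drop the stale cached environment
juju bootstrap -e canonistack       # re-reads environments.yaml
```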
<vila> mgz: ok, so I'm in with.. and it's proceeding
<vila> pfew, at least I'm behind that wall
<vila> mgz: thanks, a few questions then
<vila> mgz: using the same key everywhere won't match all my use cases, was it just for debug or is it a hard requirement or.. why ? ;)
<mgz> because we go through chinstrap, you have to give juju a key that chinstrap will accept
<mgz> that's all really
<vila> mgz: hold on, juju relies on ~/.ssh/config right, so whatever I say there is not juju's concern
<vila> mgz: I mean, juju doesn't know it has to go thru chinstrap
<mgz> no, but it is supplying a given key, that needs to be accepted by the bouncer
<vila> mgz: and now it's juju destroy-env that is unhappy :-(
<mgz> apparently that overrides the one in the config block for the forwarding agent
<vila> oh wait, my ssh/config still specifies my canonistack key so it seems that bootstrap and destroy-env use different tricks 8-/
<vila> mgz: and juju debug-log has the same issue (.ssh/config for chinstrap bouncer fixed to use the lp key)
<vila> mgz: so connecting with ssh to the state server works, authorized keys there says juju-client-key, vila@launchpad, juju-system-key (all prefixed with Juju:)
<vila> but destroy-env and debug-log hang
<mgz> destroy-env doesn't use ssh at all
<mgz> you just want the sshuttle tunnel up for that
<vila> rhaaaaaaaaa
<mgz> so it can talk over the juju api
<vila> always sshuttle after successful bootstrap, damn it
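(The missing piece, sketched — the bouncer hostname and a /16 covering the 10.55.60.* and 10.55.32.* ranges above are assumptions:)

    # destroy-environment and debug-log talk to the juju API over plain TCP,
    # so the route to the state server must be up first:
    sshuttle -r chinstrap.canonical.com 10.55.0.0/16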
<mgz> having fun yet? :)
<vila> mgz: hehe, yeah, we're automating stuff but I seemed to be the only one that couldn't bootstrap anymore
<mgz> you have too many keys :0
<vila> which kind of limits the fun ;)
<vila> mgz: I was taught this was a good thing ?
<vila> mgz: and I even got a *new* one when introduced to canonistack...
<mgz> yeah, I don't use that
<mgz> just give juju my one that works on chinstrap
<mgz> (well... and set agent forwarding so my *other* key can be used to talk to launchpad so bzr works... so it's not all simple)
<vila> mgz: so if I want to allow several keys from the juju instance, I need to switch from authorized-keys-path to authorized-keys and generate a proper content right ?
<mgz> just using one really is best
<mgz> but yeah, you can supply multiple
<vila> ha good, I need to check which one I need exactly but at some point I have to create a nova instance and tell it which key to use, all was fine until now :-}
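(A sketch of the inline alternative — key material elided, comments are mine:)

    # environments.yaml: authorized-keys takes newline-separated public keys,
    # instead of authorized-keys-path pointing at a single file
    authorized-keys: |
      ssh-rsa AAAA... vila@launchpad
      ssh-rsa AAAA... vila@canonistack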
<vila> mgz: anyway, I'm unblocked for now, thanks for the help !
<mgz> :)
<vila> mgz: I'm reading the lp:juju-core log, how should I interpret: "Increment juju to 1.17.3." as opening .3 or releasing it ? (I suspect opening, so later revisions will be part of 1.17.3, is that right ?)
<mgz> vila: opening
<vila> yes !
<vila> mgz: So i think my issue may be fixed by revno 2293 but introduced by revno 2286, I'll live with a single ssh key until 1.17.3 is released and re-test then
<vila> mgz: what's the best time to chat with Andrew Wilkins ?
<jcastro> hey fellas so what's the TLDR on HPCC
<jcastro> looks like Xiaoming has done the fixes the last review asked for
<marcoceppi> jcastro: looks like it was added back yesterday
<marcoceppi> jcastro: they're still not addressing the validation portion. It will only validate if you provide it a sha1sum via cfg, so if you don't do that it falls back to just downloading packages
<marcoceppi> I can tell now it's not going to pass for charm store inclusion
<lazyPower> marcoceppi: you doing the next review?
<marcoceppi> lazyPower: I could
<lazyPower> I bet XaoMing is tired of reading my review text by now ;)
<lazyPower> *Xiaoming
<marcoceppi> I'll give it a go in a bit
<bloodearnest> working on adding tests to gunicorn charm
<bloodearnest> I have unit tests in ./tests
<bloodearnest> I want to add some functional tests before proposing
<bloodearnest> should I use amulet, or 'charm test', or some combination?
<bloodearnest> and should I put the unit tests somewhere else?
<marcoceppi> charm test is just a test runner
<marcoceppi> bloodearnest: CHARM_DIR/tests is reserved for functional tests
<bloodearnest> marcoceppi: ok will move it
<marcoceppi> unit tests should either go in hooks or elsewhere
<bloodearnest> marcoceppi: so can I write a test in CHARM_DIR/tests using amulet that will be run by 'charm test'
<marcoceppi> bloodearnest: correct
<bloodearnest> marcoceppi: sweet. Do you know of any charms that have such tests I can use an example?
<marcoceppi> you can write tests in any language, but we recommend amulet
<marcoceppi> bloodearnest: yeah a few, one sec
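(A minimal sketch of the convention under discussion: `charm test` runs the executables under tests/ in order, so the first one usually installs amulet. The file name and PPA are assumptions:)

    #!/bin/bash
    # tests/00-setup -- hypothetical bootstrap script run by `charm test`;
    # it makes amulet available to the functional tests that follow.
    set -e
    sudo add-apt-repository -y ppa:juju/stable
    sudo apt-get update -qq
    sudo apt-get install -y amulet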
<gnuoy> Does anyone have a moment to help me figure out why I can't get my env to find the juju-tools in a private openstack cloud ? I've uploaded to a swift bucket and created the keystone endpoints but I keep getting ERROR juju supercommand.go:282 no matching tools found for constraint. This is juju 1.16.5
<mgz> gnuoy: on bootstrap I presume? can you pastebin a --debug log?
<gnuoy> mgz, juju-metadata validate-tools
<gnuoy> will do
<roadmr> hey folks! the cs:precise/openstack-dashboard-13 charm installs openstack grizzly's horizon but also installs an incompatible version of django (1.5.4) from some ubuntu-cloud repository :( should I file a bug about this or is this fixed somewhere, somehow?
<mgz> roadmr: bug 1240667
<_mup_> Bug #1240667: Version of django in cloud-tools conflicts with horizon:grizzly <charms> <cloud-archive> <cts-cloud-review> <packaging> <regression> <ubuntu-cloud-archive:Confirmed> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1240667>
<mgz> and dimitern is working on it today
<bloodearnest> gnuoy: agy had this issue last week on prodstack
<dimitern> roadmr, i'm just testing the fix now on ec2
<dimitern> roadmr, what did you deploy after bootstrapping?
<gnuoy> mgz, I'll catch up with agy rather than use up your time (the log will take a few minutes to cleanup anyway as its doing fun things like displaying passwords in clear text)
<gnuoy> bloodearnest, thanks
<mgz> gnuoy: sure, poke me again if you need to
<gnuoy> thanks, much appreciated
<bloodearnest> gnuoy: it was after a juju env upgrade from 1.14 -> 1.16 IIRC, juju was looking in the wrong place for the tools.
<bloodearnest> or rather, the tools were not where juju was looking
<gnuoy> bloodearnest, that sounds very like my situation
<roadmr> dimitern: oh! I hadn't looked for this in the cloud-archive project! I just did "juju deploy openstack-dashboard" on a maas provider (all nodes installed with Ubuntu 12.04, save for the maas controller which is trusty)
<roadmr> dimitern: all other openstack components are already deployed but I was getting the server error when trying to access horizon
<dimitern> roadmr, ok, thanks for the info, i'm trying now
<roadmr> dimitern: awesome, thanks :)
<gnuoy> mgz, this is the last part of the output from validate-tools http://pastebin.ubuntu.com/6915759/
<gnuoy> I see it looking up the endpoint for juju-tools correctly
<gnuoy> and then it seems to query the index files and give up
<gnuoy> The auth url is the url of the public bucket I created and pointed the endpoint at fwiw
<lazyPower> ev: ping
<blackboxsw> hi folks, should I expect that relation-data still be present during the *-relation-departed hook?  for instance. if I relation-set blah=2 in the relation and that relation-departed hook fires, should I be able to relation-get blah during teardown?
<blackboxsw> ..currently I'm seeing permission denied on any relation-get calls from the departed hook
<marcoceppi> blackboxsw: I thought you could, but it's not reliable
<blackboxsw> marcoceppi, cool thanks, I was thinking I might persist a json file containing information I need for service teardown on the unit that needs it during the *-departed hook. Hopefully that approach sounds reasonable. I can't think of any way w/ juju to ensure I always have the data I need to properly tear down the service
<blackboxsw> so, I'd setup the persistent file during *-relation-changed or *-relation-joined and reference it during *-departed
<marcoceppi> blackboxsw: caching relation data to a dot file seems fine
<marcoceppi> within $CHARM_DIR
<blackboxsw> roger thanks marcoceppi
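(A sketch of the caching approach blackboxsw describes — the relation name "db" and the cache file name are made up:)

    #!/bin/bash
    # hooks/db-relation-changed -- cache the remote settings while
    # relation-get is known to work:
    relation-get --format=json > "$CHARM_DIR/.db-relation-cache"

    #!/bin/bash
    # hooks/db-relation-departed -- tear down from the cache instead of
    # relying on relation-get, which may be unavailable here:
    juju-log "tearing down with: $(cat "$CHARM_DIR/.db-relation-cache")"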
<marcoceppi> blackboxsw: IMO you should be able to run relation-get in relation-departed
<marcoceppi> do you have logs from when you get the perm denied?
<blackboxsw> marcoceppi, I'll deploy again and grab what I get from debug-hooks.
<marcoceppi> blackboxsw: cool, thanks!
<blackboxsw> thank you. will be about 30 mins though.... and I'll pastebin it
<blackboxsw> marcoceppi, sorry that took so long: https://pastebin.canonical.com/104649/   here's a pastebin showing that the departed hook doesn't have access to relation data that should be available
<marcoceppi> mgz: should the departed relation hook have access to userdata? I was under the impression it should
<KarielG0> hello
<marcoceppi> blackboxsw: from what I understand (and have been telling people for the last few years), this is a bug
<marcoceppi> imo, departed should have relation-get access, and broken should have no relation access
<marcoceppi> KarielG0: hello o/
<blackboxsw> dpb1, pointed me at  https://juju.ubuntu.com/docs/authors-relations-in-depth.html   which seems to say relation-data should exist until the last unit depars
<blackboxsw> departs
<KarielG0> is the Q&A going to be here or did they change the channel?
<CheeseBurg> is this the right channel for the Jono thing?
<blackboxsw> hmm not sure which Q&A you are referring to KarielG0. I just fired a couple questions into the room as the experts are here :)
 * blackboxsw checks around
<Levan> I like pie
<jono> Q&A is in #ubuntu-on-air - just reload the page and you can join the channel again
<jono> sorry about that
<Levan> we can see you
<blackboxsw> thx jono
<linuxblack> hey calendar app
<dpb1> blackboxsw: agreed that the docs getting it right are important. :)
<blackboxsw> marcoceppi, agreed. yeah I don't find any bug against relation-departed in juju. I'll file one
<linuxblack> QUESTION; is there going to be a calendar app and weather app for 14.04
<marcoceppi> blackboxsw: well that was written with my understanding of the relation stuff, so I could be wrong
<marcoceppi> linuxblack: go to #ubuntu-on-air
<marcoceppi> blackboxsw: either way a bug would be good
<blackboxsw> https://bugs.launchpad.net/juju-core/+bug/1279018 filed for clarity
<_mup_> Bug #1279018: relation-departed hook does not surface the departing relation-data to related units <juju-core:New> <https://launchpad.net/bugs/1279018>
<blackboxsw> thanks marcoceppi
<aquarius> marcoceppi, ping about juju charm and consolidation
<marcoceppi> aquarius: pong
<aquarius> marcoceppi, OK, as you know, I have discourse running on azure through juju
<aquarius> and it's a bit too xpensiv
<aquarius> expensive
<aquarius> I'm using £75 of compute hours a month
<marcoceppi> right
<aquarius> what I'd like to do is reduce that
<aquarius> It's deployed three VMs, and three "cloud services", whatever they are
<aquarius> now, it's possible that if that were all on one machine I'd still be paying the same amount
<aquarius> but I wonder if that's not the case
<aquarius> so: I come to you to ask whether you have any ideas
<marcoceppi> aquarius: probably not, a cloud machine is the VM, with auto-dns stuff
<marcoceppi> aquarius: so, we have containerization within juju that works OK, not sure how well it works on azure
<aquarius> since a hundred pounds a month to run a not-very-busy-at-all discourse forum is insanely expensive by comparison with renting the most rubbish machine from, say, digitalocean :)
<marcoceppi> aquarius: but you can do something like, juju deploy --to lxc:0 postgresql; juju deploy --to lxc:0 cs:~marcoceppi/discourse
<aquarius> discoursehosting.com costs a lot less than that, and that's a managed service. :)
<marcoceppi> aquarius: you could just rent a bunch of "rubbish" machines and use juju manual provider
<rick_h_> aquarius: can also check out hazmat's https://github.com/kapilt/juju-digitalocean if you're interested in trying it out in DO
<marcoceppi> aquarius: well, we never said deploying to the cloud was cheap ;)
<aquarius> marcoceppi, I suppose my thought is more this question: is there something I've done wrong in the setup, or something the charm does wrong in the setup... or is it actually the case that using juju to host a discourse forum on Azure is *expected* to cost approximately eight times as much as managed discourse hosting for it? :)
<aquarius> because if the answer really is the latter then this is something of a blow to the juju/azure model, I have to tell you :)
<marcoceppi> aquarius: you can do service co-location on a single machine, which will cut down costs from three machines to one
<roadmr> something like juju deploy --to ?
<marcoceppi> right, but just using --to will do a bit of hulk smashing, using juju deploy --to lxc:0 or --to kvm:0 (soon) will set up containers on that machine
<rick_h_> marcoceppi: do you have to setup the containers first? or will it auto create if it doesn't exist?
<marcoceppi> aquarius: also, each cloud provider costs differently, aws v hp cloud v azure, etc
<marcoceppi> rick_h_: it will autocreate if you just do lxc:0 that will create 0/lxc/#
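(The placement variants being contrasted, as commands — service names are only examples:)

    juju deploy --to 0 mediawiki          # hulk-smash: straight onto machine 0
    juju deploy --to lxc:0 postgresql     # new container, shows up as 0/lxc/0
    juju add-unit --to lxc:0 postgresql   # another container, 0/lxc/1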
<rick_h_> aquarius: there's two levels of colocation and then there's just using a cheaper provider as well.
<aquarius> marcoceppi, I was wondering about that. I have two questions about it, though. First question: do I have to wipe everything and redeploy from scratch to do that, or can I "migrate" from my current set up to it? Second question: in your opinion, will that actually be cheaper? (Or will I just use 3x the CPU on one machine and pay the same?)
<rick_h_> so lots of options
<aquarius> rick_h_, part of the reason we're using Azure is that MS kindly sponsored the forum
<rick_h_> aquarius: it'll use the cpu of the combined services. If all three machines are running 90% cpu there's not going to be a good way to run on one box
<marcoceppi> aquarius: well, compute hours are typically billed based on frequency of cpu scheduler, but are typically $/hour of usage in general
<rick_h_> if they're all running a load of .1 then you can colo them fine as long as you're comfy with all the services sitting on one VM
<aquarius> rick_h_, I'm not sure how to work this out, but I'll bet you a fiver right now that those machines are idle at the moment ;)
<marcoceppi> aquarius: yeah, they're pretty idle, discourse probably uses the most of all, and it's pretty well behaved
<marcoceppi> aquarius: you'll have to do manual data migration from postgresql, etc
<rick_h_> aquarius: then yea, I'd just look at colo'ing them. The deployer format supports colo'd units so I'd take your current environment, export a bundle, tweak it to be colo'd, and try to bring it up side by side with your current environment
<aquarius> what concerns me more is that I don't know whether what we're basically saying here is "don't use juju to deploy discourse to Azure because it's just really really expensive" or whether we're saying "you should have used the charm in the following way, $alternativeway, and you didn't"
<rick_h_> and then if that's cool, bring the new colo up along side the live, replicate pgsql, and go
<marcoceppi> basically, stand up a new azure environment, copy database + files over to new deployment, point public IP address to new site
<marcoceppi> tear down old deployment
<aquarius> I really, really, really do not want to tear everything down to zero and redeploy from scratch and restore postgres backups :( I am no sysadmin :(
<rick_h_> aquarius: we're saying that juju's default use is services at scale and elasticity. That's more $$ ootb. You can go for less $$ by using juju's containerization to colo services.
<aquarius> especially since the very act of deployment will, I imagine, use up all my remaining CPU time for the month...
<rick_h_> aquarius: the Gui is starting work on a visual helper for doing service placement over the next couple of months
<marcoceppi> rick_h_: still doesn't solve data migration from within charms
<aquarius> OK. Well, that's clear at least, even if it's not the answer I was looking for. :)
 * marcoceppi goes back to pondering about a backup/restore charm
<rick_h_> marcoceppi: no, but pgsql supports replication correct?
<rick_h_> so that's something the charms can/should do anyway
<rick_h_> but yea, there's some migration pain in there, but it can be scripted as well
<marcoceppi> rick_h_: yes, you could add-unit --to lxc:0 then remove the postgresql/0 unit after migration
<marcoceppi> I guess that *could* could
<marcoceppi> work*
<rick_h_> marcoceppi: right, it's not as simple as it could/should be...but it's not impossible either
<marcoceppi> well, postgresql also dumps nightly backups too, which makes it easy to restore
<marcoceppi> but yeah
<rick_h_> so aquarius, yes it's $$ to use any cloud doing one machine per service unit. Yes there's a way to avoid that using colocation. Ugh that migration tooling to help you turn one into the other isn't available as a super easy tool for you.
<rick_h_> and there's stuff in the work that makes this easier in the future, but doesn't help you this weekend
<aquarius> *nod*
<phaser> hello guys! where can a new developer find the codebase for juju, if he wants to contribute to the cause?
<sarnold> phaser: hello :) I believe this is where all the development happens: https://code.launchpad.net/juju-core
<phaser> thank you for the quick response :)
<hazmat> aquarius, rick_h_ re do provider not ready yet.
<bdmurray> i'm trying to bootstrap an environment and I'm getting an error about it being already bootstrapped but juju destroy-environment says there are no instances
<bdmurray> How can I resolve this?
<marcoceppi> bdmurray: what's the name of the environment?
<bdmurray> marcoceppi: the name? Its called openstack in environments.yaml
<marcoceppi> bdmurray: then what you'll want to do is rm -f ~/.juju/environments/openstack.jenv
<marcoceppi> then try to bootstrap
<bdmurray> marcoceppi: that still didn't work
<marcoceppi> bdmurray: run `juju destroy-environment openstack` again, then delete the .jenv (again) then change the control-bucket name in the environments.yaml file to something else (doesn't matter what) then try bootstrapping again
#juju 2014-02-12
<hloeung> bdmurray: delete the provider-state and bootstrap-verify files in the control bucket
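(The recovery sequence from this exchange, consolidated — <control-bucket> stands for whatever environments.yaml names, left as a placeholder:)

    juju destroy-environment openstack
    rm -f ~/.juju/environments/openstack.jenv
    # hloeung's step: clear the stale bootstrap state out of the bucket
    swift delete <control-bucket> provider-state bootstrap-verify
    juju bootstrap -e openstack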
<vila> axw: hi there, still around ? (Replying to your mail while you pong ;)
<axw> vila: hiya, I am
<vila> axw: almost there
<vila> axw: replied for the questions in the first part of the mail, that leaves us with the config you want me to try, let me re-phrase to make sure we're on the same page
<axw> sure
<vila> no , in fact, I can't ;) So, when you say "Could you
<vila> try bootstrapping, then try to manually ssh with the options above and "-v"
<vila> to see what keys ssh is trying?", what config and what options do you mean ?
<axw> I mean try running the ssh command I pasted:  ssh -v -i ~/.juju/ssh/juju_id_rsa -i ~/.ssh/id_rsa -i ~/.ssh/id_dsa -i ~/.ssh/identity <host>
<axw> I realise they're not relevant to you, but that's what juju's doing
<vila> axw: I realize the config that was working doesn't make a lot of sense given your explanations, but it was working, which blurs what I should try
<vila> oh, right
<axw> it *should* be working if you've got the private key loaded in ssh-agent
<axw> those -i's are just so we try the defaults in case they're not also loaded into an agent
<vila> so, restore a config with one key for chinstrap and a different one in authorized-keys-path ?
<axw> yes please
<vila> ha, loaded in the agent, hmm, hard to say if/when that was the case, let me setup that
<axw> I will try the same to see if I can reproduce it
<axw> you said in your email it was though?
<axw> do you mean it was but maybe is not now?
<vila> yeah, there is a grey area during which I suspect I upgraded juju to 1.17.2 while my node was still bootstrapped
<vila> running $ juju bootstrap --show-log
<vila> damn it it works
<axw> doh :)
<vila> and so does the ssh command above
<axw> if it does fail again, just confirm that the agent has/n't got the key loaded
<vila> ssh-add -l lists the key in authorized-keys-path
<vila> axw: I will but it's a passwordless key so I think those don't have to be added to the agent, will re-try with a clean one
<axw> vila: it does if you want ssh to try it. otherwise it needs to be specified with -i, or in ~/.ssh/config
<vila> axw: well, it is specified in my ~/.ssh/config in the section that goes through chinstrap
<axw> the authorized-keys-path one?
<vila> yes
<vila> re-trying after 'ssh-add -D'
<vila> queried for my lp key
<vila> still working
<vila> queried while bootstrapping I mean, trying your ssh command
<vila> Host key verification failed.
 * vila fixes known_hosts
<axw> vila: just pass -o StrictHostKeyChecking=no
<vila> ssh works too
<axw> you'll get that a lot otherwise
<vila> huh, OMG:
<vila> apt-cache policy juju-core -> 1.16.3-0ubuntu0.13.10.1~ctools
<vila> wth
<vila> doh
<axw> doh :)
<vila> I'm on the machine-0 host ;)
<vila> right, on my desktop: 1.17.2-0ubuntu1~ubuntu13.10.1~juju1
<vila> from http://ppa.launchpad.net/juju/devel/ubuntu/ saucy/main amd64 Packages
<axw> so if that's working, I'm really not sure why it wasn't working before
<vila> axw: gee, me neither but it sure wasn't, there was an error initially (using one key in ~/.ssh/config and a different one in envs.yaml) but I did align them and reproduced the error (without ssh-add -D though, but I can't see the link). Then I used the same key and it worked
<axw> I'll play around with my keys and see if I can reproduce it
<vila> axw: let's call it unreproducible for now
<vila> axw:  is there some place I can check for logs about some ssh -vv juju may have internally ?
<vila> axw: or some way to pass -vv to juju and get some output ?
<axw> vila: nope, stupidly I did not add logging for that bit
<vila> axw: bah, I guess, using the ssh command (with nohostchecks) is close enough right ?
<axw> yes, and the "-i" args
<axw> they're important because they change behaviour of how default keys get picked up and so forth
<vila> axw: sure, thanks for that one
<vila> axw: yup, the commit msgs were very good to point me in that direction
<vila> axw: I'm adding automated tests around bootstrap anyway, so I'll know if I run into that again
<axw> great, thanks
<vila> axw: thanks to you, enjoy your evening !
<axw> cheers, and have a nice day :)
<cargill> hi, after having some problems with an existing vagrant env, I removed it (removing Vagrantfile, ./vagrant, ~/.vagrant.d and ~/Virtualbox VMs), then set it up again and I cannot do juju ssh or debug-log anymore (in the freshly created environment)
<cargill> this makes juju slightly useless and me stuck because I have no idea whatsoever how to find out what went wrong (if it is #1202682, I do not understand how sinzui divined that it should be an ssh issue, so I cannot say I'm suffering from that)
<_mup_> Bug #1202682: debug-log doesn't work with lxc provider <cts-cloud-review> <debug-log> <local-provider> <papercut> <ssh> <ui> <juju-core:In Progress by themue> <https://launchpad.net/bugs/1202682>
<cargill> oh, and /var/log/juju-setup.log logs a few different errors/warnings but I can still deploy (some) charms, so they might be inconsequential
<TheMue> cargill: never worked with vagrant, the mentioned issue is when using the local provider
<TheMue> cargill: but you could paste your setup log to have a look
<cargill> TheMue: which the vagrant image sets up (it's a virtualbox machine that has juju+local set up)
<cargill> documented here https://juju.ubuntu.com/docs/config-vagrant.html
<TheMue> cargill: ah, based on lxc, so at least the ssh/debug-log problem could be the same. the local provider doesn't use an explicit container for state and api, so ssh doesn't work here.
<TheMue> cargill: but the log exists at $JUJU_HOME/<ENVNAME>/log/all-machines.log
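(Concretely, for a default setup that works out to something like the following; "local" as the environment name is an assumption:)

    tail -f ~/.juju/local/log/all-machines.log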
<cargill> if that's the case, that would explain debug-log not being able to run, but would not explain the failure to ssh to other containers
<cargill> this is juju-setup.log, this time for a precise vagrant box http://pastebin.com/zJ88WR6T
<cargill> TheMue: there is no all-machines.log file anywhere on the box
<TheMue> cargill: hmm, so it is more different from local provider than thought
<cargill> if I understand the documentation, it should use a vanilla local provider, so maybe something is horribly broken, just not showing in the juju-setup.log?
<cargill> yet nothing matching /error/i in the syslog
<cargill> or anywhere relevant in /var/log
<cargill> must be something wrong with my setup (I wish juju just worked on Debian directly without an extra level of virtualization)
<cargill> Any thoughts on how to find out what's wrong/how to work around this?
<cargill> funny, I can "juju ssh postgresql/0" but not "juju ssh 2" where 2 is the machine that postgresql/0 uses
<cargill> why does the postgres node see connections from another node as coming from 10.0.3.1 instead of the node's IP (10.0.3.86)?
<dimitern> cargill, it seems you're using the local provider - its ip addresses are like 10.0.3.x, so this could be the host machine's ip in the lxc bridge
<cargill> yes it is
<dimitern> cargill, so that's why - the host's ip is also the default route on the lxc subnet
<cargill> dimitern: this makes relations to postgres not work, since only the other node's IP gets whitelisted, not the host's IP
<cargill> is this a bug?
<dimitern> cargill, container networking is still not implemented, so you might expect issues like this
<cargill> ah, ok, so a known missing feature and need to work around that, fine with me, is there a list? :)
<dimitern> cargill, and manual steps might be needed for charms to work, like setting config options etc. from the command line
<dimitern> cargill, juju-dev@lists.ubuntu.com is the place to raise questions or the #juju-dev channel here on freenode, but reporting bugs is always welcome - just please make sure you do a search first to see if what you're seeing is not already reported
<cargill> I mean a list of quirks to watch for in a "local" deployment
<dimitern> there's a list of known bugs - https://bugs.launchpad.net/juju-core/+bugs?field.tag=local-provider
<cargill> dimitern: thanks
<melmoth> juju people: what happen in a charm when a "juju destroy-unit" is called ?
<marcoceppi> melmoth: fairly certain that the unit is marked as dying, then any relations that exist go through the -relation-departed/broken cycle, then the stop hook is called, then it's removed from the topology
<rick_h_> marcoceppi: I'm trying to run two local envs and I see it noting storage ports and such but I'm getting an error ERROR cannot use 37017 as state port, already in use
<rick_h_> when I try to bring up the second local env
<rick_h_> is there a config bit for the env.yaml for that? I don't see a reference in the docs
<marcoceppi> rick_h_: that's annoying, what version juju?
<rick_h_> 1.17.2-trusty-amd64
<marcoceppi> rick_h_: there's like three things you need to do, iirc, and they're not really well documented anywhere
<marcoceppi> thumper: what do you need to run multiple local envs?
<rick_h_> marcoceppi: ok, good to know it's not just me missing the boat then.
<rick_h_> marcoceppi: he'll be afk, I can check with him tonight. I need to get some instructions written up for demos for sales folks
<marcoceppi> rick_h_: yeah, let me go poke at it real quick see if I can't jog my memory
<marcoceppi> rick_h_: if you do, please pass those notes on to me so I can get them in the docs
<rick_h_> marcoceppi: k, appreciate it if you know, but no rush. I've got a couple of days to get things written out/tested
<rick_h_> marcoceppi: rgr
<stokachu> is there a decent application for testing websockets in the console?
<stokachu> like i want to test the connection to the juju api with json parameters
<marcoceppi> Does anyone know if you can deploy a subordinate to a subordinate
#juju 2014-02-13
<keshava> $JUJU_HOME/bin/juju bootstrap -v --debug --upload-tools would return 2014-02-13 00:35:41 ERROR juju.cmd supercommand.go:294 failed to enable bootstrap storage: failed to create storage dir: exit status 255 (Permission denied (publickey,password).)
<keshava> anyone knows reasons/solution for this ?
<keshava> ERROR juju.cmd supercommand.go:294 failed to enable bootstrap storage: failed to create storage dir: exit status 255 (Permission denied (publickey,password).) - how to workaround this?
<bradm> keshava: sounds like you're using the wrong public key?
<stub> Can anyone point me to a charm which streams data (in my case, a log file) to another service? netcat would work but doesn't have the security I'd want.
<sarnold> stub: there are rsyslog charms
<sarnold> stub: http://manage.jujucharms.com/charms/precise/rsyslog
<stub> Ta.
 * stub wonders about serializing his logs and using syslog as the transport
<vila> marcoceppi: hi there, I'm trying to understand how https://pypi.python.org/pypi/amulet/1.2.1 relates to lp:amulet where there is no 1.2.1 tag, can you help ?
<stub> Gah, I've got logging backwards
<stub> My charm needs to 'require' syslog in order to emit log messages to a receiver that 'provides' syslog
<stub> I think the rsyslog charm was written from the POV of syslog being a service, when I'm thinking of it as a protocol
<stub> But I guess that way makes sense if we start running hooks in a better defined order, provider side hooks first
<marcoceppi> vila: amulet is mirrored on launchpad, but hosted on github
<marcoceppi> vila: https://github.com/marcoceppi/amulet/releases
<vila> marcoceppi: but https://github.com/marcoceppi/amulet.git/ doesn't have the 1.2.1 tag either ... what am I doing wrong >-/
<marcoceppi> vila: it's there, hard refresh?
<vila> marcoceppi: that made it, glitch since yesterday evening I suppose... Anyway,
<vila> marcoceppi: thsnks
<marcoceppi> vila: np
<vila> hehe, thanks (damn fingers)
<vila> marcoceppi: I'm also trying to find where I can get an ubuntu packaged version for that, I'm using ppa:juju/devel but amulet is not there (and I need support for precise, saucy, trusty, /me sighs, life is complicated sometimes... ;)
<marcoceppi> vila: tools are in ppa:juju/stable :)
<marcoceppi> and it's built for precise, saucy and trusty
<vila> \o/
<vila> marcoceppi: can I mix ppa/stable with juju 1.17.2 ?
<marcoceppi> vila: yes, so if you have the devel ppa, you'll always get the latest juju as juju 1.17.2 > 1.16.6
<marcoceppi> but when the next stable release comes out, 1.18.0 it'll get installed over 1.17.2
<marcoceppi> since we do devel and stable releases differently
<marcoceppi> evens are stable, odds are devel
<vila> marcoceppi: cool, thanks for the tip about juju/stable, I couldn't find it on lp from lp:amulet :-/
<marcoceppi> as such, having both ppa won't break anything
<marcoceppi> vila: I'll update the project page, thanks for the feedback!
<vila> marcoceppi: oh, ok, good to know (I hope I will remember that odd/even in this context...)
<cargill> when joining the db-admin relation with postgres, the user does not get the right to create new databases?
<marcoceppi> cargill: uh, the user /should/
<cargill> when I do "SELECT rolcreatedb from pg_roles where rolname like 'db_admin%';", all are flase
<cargill> *false
<marcoceppi> cargill: give me a few mins to spin up the latest postgres charm and check
<cargill> sure, mine says 'cs:precise/postgresql-61'
<cargill> just in case it matters
<marcoceppi> cargill: that appears to be the latest
<cargill> ok, that's interesting, it might be the pg_hba.conf interaction with the local environment again, as trying to create it in psql works now
<cargill> and it does have rolsuper, that should give the user the right to do anything?
<marcoceppi> cargill: yeah, db-admin gives you superuser permissions
<marcoceppi> so you should be able to romp around and create a database, etc
<cargill> marcoceppi: thanks, and sorry for the false alarm, it was caused by postgresql restoring my changes to pg_hba.conf after a while
<cargill> if doing deployment from local: and I've fixed bugs in the charm, how do I get them to be picked up? do I have to update revision? just removing the service and redeploying it again still uses the old version and there is nothing in .juju/charmcache
<marcoceppi> cargill: yeah, from local, run juju deploy -u --repository... local:...
<marcoceppi> the -u will update the cache
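(As a concrete command — the repository path and charm name are examples:)

    # -u / --upgrade refreshes the cached copy of a local: charm, so edits
    # are picked up without manually bumping the revision:
    juju deploy -u --repository ~/charms local:precise/mycharm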
<rick_h_> marcoceppi: doc your way with working dual local lxc environments
<marcoceppi> rick_h_: \o/
<rick_h_> now that I've got this I want cross env relations :P
<cargill> marcoceppi: so the proper way is changing the revision, thanks
<marcoceppi> rick_h_: dont' we all :)
<cargill> when generating default passwords, is there a way to communicate them to the user?
<marcoceppi> cargill: no, we strongly recommend you not auto-generate passwords and have users set it via config.yaml
<marcoceppi> cargill: you can juju-log it, or place it in a file on the server and tell the user to ssh + cat that file though, again not recommended
<cargill> but what if they do not set it?
<marcoceppi> cargill: you can have the config-changed hook exit 1, to alert the user of an error, or have it exit 0 while it waits for the user to change/set the password
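(A sketch of that second option — the hook is hypothetical, "password" being a config.yaml option the charm defines:)

    #!/bin/bash
    # hooks/config-changed -- wait for the admin to set a password
    # rather than auto-generating one.
    password=$(config-get password)
    if [ -z "$password" ]; then
        juju-log "password unset; waiting for: juju set <service> password=..."
        exit 0   # use exit 1 instead to put the unit into an error state
    fi
    # ...continue configuring the service with $password...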
<jcastro_> marcoceppi, so at the sprint people were making fun of me because I had to keep blowing away .jenv files
<jcastro_> http://joshstrobl.blogspot.com/2014/02/developing-ubuntu-juju-charm.html
<jcastro_> I feel vindicated knowing I wasn't the only one with broken local provider
<JoshStrobl> Thanks jcastro_, I'll go ahead and sign up in the juju mailing list and take part in the discussion!
<marcoceppi> JoshStrobl: its not local either
<marcoceppi> err jcastro_^
<jcastro_> we should name this script thumper.sh: http://pastebin.com/UMeZdH0Y
<marcoceppi> jcastro_: ha!
<jcastro_> JoshStrobl, this is sweet feedback dude
<marcoceppi> jcastro_: HOLY CRAP DUDE
<lazyPower_> niceeee
<JoshStrobl> jcastro_: Just figured I'd share my experience writing a charm =)
<marcoceppi> jcastro_: please don't recommend that script
<marcoceppi> jcastro_: great way to lose your entire environments.yaml
<lazyPower_> its pretty destructive without any kind of warning
<JoshStrobl> But yea, it was a bit of a pain setting up the environment, destroying it, etc.
<JoshStrobl> marcoceppi and lazyPower_ that is the point
<lazyPower> JoshStrobl: oh i'm aware of that, but some people just run scripts blindly
<marcoceppi> JoshStrobl: some of us have more than one environment in environments.yaml :)
<lazyPower> ^
<JoshStrobl> I did it because whenever my charm failed to be installed properly and I went to kill the service, it would just put the state as "dying" the entire time, I couldn't kill it without destroying the environment.
<jcastro_> my point was mostly "we suck at cleanup so badly someone had to write a script."
<marcoceppi> jcastro_: yeah, I agree
<jcastro_> lazyPower, actually, weren't you writing a "nuke it from orbit" plugin?
 * marcoceppi makes a cleanup plugin
<lazyPower> jcastro_: i dubbed it "The Atom Bomb"
<timrc> Are there best practices for accessing private bzr branches from LP from a service node?  The most obvious strategy seems to be to create a bot user and copy around a private ssh key but that thought alone I suspect makes kittens cry
<JoshStrobl> I couldn't even use destroy-machine because the charm was stuck as dying.
<lazyPower> and yeah - Its still in my slack task listing
<marcoceppi> JoshStrobl: there's a force flag for that
<marcoceppi> JoshStrobl: juju terminate-machine --force #
<JoshStrobl> marcoceppi: I'd try that out, but thanks to you being so helpful (bad marcoceppi, bad!), my charm works.
<JoshStrobl> So I'll need to intentionally break the charm to test if that works :P
<JoshStrobl> marcoceppi and jcastro_ I'll update my blog with the update you provided, provide some warnings about the script, and join the juju mailing list. My fiancee and I are going to a dance however, so I'll be afk!
<JoshStrobl> *with the update, as in the quickstart stuff.
<marcoceppi> JoshStrobl[afk]: enjoy!
<marcoceppi> jcastro_: here's pretty much all you need for a clean up: https://gist.github.com/marcoceppi/8977344
<lazyPower> marcoceppi: not sure about recent revisions of juju as i haven't had any issues, but with 1.17.0 i had to nuke ~/.juju/$env in the case of local as well. just removing the .jenv wasn't enough.
<marcoceppi> lazyPower: OHH, yeah, good point
<marcoceppi> lazyPower: https://gist.github.com/marcoceppi/8977344
<lazyPower> marcoceppi: and then there's the case where it wasn't destroying the lxc containers...
<marcoceppi> lazyPower: well, that's been fixed in 1.17
<lazyPower> you've basically just written the nuke plugin once you've done that. parse the env name out of the lxc-ls --fancy and bam.
<lazyPower> yeah, i agree, its no longer really required.
<lazyPower> someone was diligent in getting that patched, and i send them all the <3 in the world for that.
<marcoceppi> lazyPower: probably thumper, at least that's who I always blame the lxc stuff on ;)
<marcoceppi> So, I'm about to make a juju-plugin plugin which downloads...plugins
<marcoceppi> much like npm, but for juju, because that seems the sanest way to package plugins
<lazyPower> +1
<lazyPower> Are you going to be hosting the API that powers this?
<lazyPower> or just parse launchpad data?
<marcoceppi> lazyPower: free for all
<marcoceppi> lazyPower: so you can host it on github, gist, lp, where ever
<marcoceppi> lazyPower: hosted
<lazyPower> brilliant
<marcoceppi> open source, etc
<marcoceppi> just going to have the plugin create a $JUJU_HOME/plugins directory, and have it added to the users path
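(A sketch of that scheme — juju already treats any executable named juju-<something> on $PATH as a subcommand, so installation reduces to:)

    mkdir -p ~/.juju/plugins
    # persist this in ~/.bashrc so `juju <plugin>` keeps resolving:
    export PATH="$HOME/.juju/plugins:$PATH"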
<jcastro_> marcoceppi, do you know if we still support firing off a specific AMI for the AWS provider?
<jcastro_> we used to iirc.
<marcoceppi> jcastro_: I don't think we've supported that since the golang rewrite, let me check the source doe
<marcoceppi> code*
<marcoceppi> jcastro_: so you /could/ if you created simplestream data and overwrote the canonical stream data
<marcoceppi> then you could have it use another AMI
<mbruzek> marcoceppi, Are the machine logs for hp cloud stored on my local machine?
<marcoceppi> mbruzek: no, they're in the bootstrap node
<marcoceppi> mbruzek: the only reason machine logs are stored on your machine is if you're using the local provider because your laptop/desktop becomes the bootstrap node
<mbruzek> OK
<mbruzek> all-machines.log?
<tomixxx4> hi, when i try to deploy a service on a machine with "juju deploy mysql --to lxc:0" it says "container creation template for juju-machine-0-lxc-0 failed"
<mbruzek> in /var/log/juju/ ?
<tomixxx4> the output, of "juju status" in detail is: http://pastebin.ubuntu.com/6926247
<jcastro_> marcoceppi, excellent
<marcoceppi> jcastro_: haven't tested that, but it shoudl work
<marcoceppi> tomixxx4: which provider?
<jcastro_> ok so not impossible
<marcoceppi> mbruzek: machine-# is where the machine logs are
<marcoceppi> tomixxx4: nvm, I see you're on maas
<tomixxx4> marcoceppi: what do you mean "which provider" please have a look at the output of "juju status":  http://pastebin.ubuntu.com/6926247
<marcoceppi> tomixxx4: does this machine have outside network accecss?
<marcoceppi> machine being cloud1.master
<tomixxx4> marcoceppi: yes
<marcoceppi> tomixxx4: well, the error states "failed to get https://cloud-images.ubuntu.com/query/precise/server/released-dl.current.txt"
<tomixxx4> marcoceppi: that should be fixed with help from #maas because the nodes were able to resolve all packages while booting after i executed a bash script to do some NAT
<marcoceppi> tomixxx4: so, ssh in to that machine and try to wget that URL
<tomixxx4> so i should enter this command on the bootstrap node?
<marcoceppi> tomixxx4: yup
<tomixxx4> or any other url?
<marcoceppi> tomixxx4: that cloud-images url
<tomixxx4> it answers with "password:"
<marcoceppi> tomixxx4: that's...what?
<tomixxx4> i have to type this command on the console of the juju bootstrap node, right?
<tomixxx4> so, i typed this command in and the response of the console is "Password: "
<marcoceppi> tomixxx4: so, you should just be able to type juju ssh 0 from wherever you ran juju bootstrap
<marcoceppi> that should SSH you into the node
<marcoceppi> then you should just type `wget  https://cloud-images.ubuntu.com/query/precise/server/released-dl.current.txt`
<tomixxx4> aha
<tomixxx4> ok
<tomixxx4> because i have the physical node beside me, so i typed this command with the node's keyboard ;)
<marcoceppi> tomixxx4: right, and maas removes the passwords, so you can't just log in
<tomixxx4> ok, i have tried ssh now it prints "thomas@cloud1.master's password: " on maas-server console
<marcoceppi> tomixxx4: did you add your ssh key to maas?
<tomixxx4> yes
<tomixxx4> so i have to add sth to "ssh cloud1.master" ?
<marcoceppi> tomixxx4: well `juju ssh 0` should try to ssh ubuntu@cloud1.master
<marcoceppi> not sure why it says thomas@cloud1.master
<tomixxx4> it says master because i use DNS resolving
<tomixxx4> provided by maas server
<marcoceppi> tomixxx4: not sure why it says thomas instead of ubuntu
<tomixxx4> OK
<marcoceppi> it should be using the ubuntu user to ssh you in when you run `juju ssh 0`, could you run `juju ssh --show-log --debug 0` ?
<marcoceppi> and paste the output
<tomixxx4> kk
<tomixxx4> k, i tried "juju ssh 0" it says "permission denied (publickey, password)
<marcoceppi> tomixxx4: yeah, so it doesn't have your ssh keys on there
<tomixxx4> i have followed the installation guide... i have created an ssh key
<tomixxx4> andd added with "+ add ssh key" to preferences of root
<marcoceppi> tomixxx4: can you run `ssh -vvv ubuntu@cloud1.master` and pastebin the output
<marcoceppi> tomixxx4: and the ssh key is on the machine you're running the juju command from, correct?
<tomixxx4> yes, its on the maas-server-node
<marcoceppi> tomixxx4: cool, so you're running these commands from the maas-master
<tomixxx4> yes
<tomixxx4> output: http://pastebin.ubuntu.com/6926349
<tomixxx4> oh
<tomixxx4> this id_rsa.pub... is this the ssh key which should come into maas-dashboard?
<tomixxx4> because i have another ssh key created some time ago, and its file name is different and its location too...
<marcoceppi> tomixxx4: so, yeah, the ssh id_rsa.pub file that's on this machine in the thomas user should be the one in the dashboard
<tomixxx4> ok, s***
<marcoceppi> add it, juju destroy-environment maas, rebootstrap, then try to deploy --to lxc:0 again
<marcoceppi> tomixxx4: if it fails, again, then do the ssh and check to make sure you can get that url
<tomixxx4> ok
<marcoceppi> tomixxx4: no worries man! you would have hit this issue sooner or later ;)
<tomixxx4> iam such an idiot, if this is the cause of the issue, i have wasted lots of hours. xP
<tomixxx4> but Thank You so far !!!
<marcoceppi> tomixxx4: np, feel free to ping me when you get going again. I don't think that's why the lxc container failed, but it's hard to troubleshoot if you can't get to the node
<tomixxx4> marcoceppi: ok ty
<tomixxx4> marcoceppi: hmm, i have changed that ssh key, destroyed juju and rebootstrapped but there is still sth wrong: http://pastebin.ubuntu.com/6926519
<tomixxx4> ok, last try for today, i have deleted all keys in .ssh and created a new one with the command "ssh-keygen -t rsa", rebootstraping now...
<tomixxx4> marcoceppi: ok, i could login now to the node and i tried wget but i get "failed: no route ho host"
<marcoceppi> tomixxx4: ah, I know this issue!
<marcoceppi> tomixxx4: you need to configure your DNS server to foward requests
<marcoceppi> quick fix, on the maas-master
<marcoceppi> tomixxx4: open /etc/bind/named.conf.options on maas-master
<tomixxx4> kk!
<marcoceppi> and add the following bit
<marcoceppi> http://paste.ubuntu.com/6926815/
<marcoceppi> tomixxx4: you should see it commented out currently
<marcoceppi> just uncomment and set the dns server to something, like 8.8.8.8
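(What the uncommented block in /etc/bind/named.conf.options should end up looking like — 8.8.8.8 is just the example under discussion, use a resolver your network allows — followed by `sudo service bind9 restart`:)

    options {
        // ...
        forwarders {
            8.8.8.8;
        };
    };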
<tomixxx4> marcoceppi: no, it's not commented out, it's set to 143.205.140.21
<tomixxx4> i told u from a bashscript to NAT right?
<tomixxx4> i guess the NAT script has caused to set this IP as DNS
<marcoceppi> tomixxx4: you're using the micro-cluster scripts?
<tomixxx4> gimme a sec plz
<tomixxx4> i have executed this script on maas-server: http://pastebin.ubuntu.com/6926827/
<tomixxx4> there is a line: dnsserver="143.205.140.21"
<marcoceppi> tomixxx4: yeah, that's from the micro-cluster scripts
<marcoceppi> tomixxx4: can you actually dig @ that address?
<marcoceppi> tomixxx4: ie, `dig @143.205.140.21 google.com` from the maas-master
<tomixxx4> yes, i guess, "query time: 1022 msec"
<tomixxx4> 1 server found
<tomixxx4> but a question: is the dns-server of the node not simply the maas-dns-server?
<tomixxx4> so 10.0.0.9 ?
<tomixxx4> because i have set "manage dhcp + dns"
<tomixxx4> instead of 143.205.140.21, which is dns-server from the maas-server
<marcoceppi> tomixxx4: right, so basically the nodes will use your maas-master as DNS, but that DNS server needs to forward requests that it dosn't know about to an outside DNS server
<marcoceppi> otherwise you'll get no route to host
<tomixxx4> hmm ok , what does this mean? do i have to edit resolvconf on ssh node and add 10.0.0.9 (maas-server) as dns-nameserver=
<tomixxx4> ?
<tomixxx4> aaaahhh 10.0.0.9 is already written in resolv.conf of the node
<tomixxx4> hmm
<tomixxx4> this is, what my "interfaces" file on the maas-server looks like: http://pastebin.ubuntu.com/6926866
<tomixxx4> dns-nameservers: 10.0.0.9 seems wrong, not?
<tomixxx4> (eth0 connects the server to the nodes, eth1 connects the server to the i-net)
<marcoceppi> tomixxx4: yeah, that dns-server should be the outward facing dns-server
<marcoceppi> dns-nameserver*
<tomixxx4> kk
<marcoceppi> tomixxx4: then you'll have to restart networking
<tomixxx4> k
<tomixxx4> hmm still no route to host
<tomixxx4> when i start bootstrapping, i have to manually power on the node which is allocated in maas
<tomixxx4> so, when i power off the node, and power on it again, does juju still work on that node?
<tomixxx4> (i have also rebooted maas-server in meantime xP)
<marcoceppi> tomixxx4: it should, unless the node has been set to stopped in maas
<tomixxx4> kk
<tomixxx4> but now, i see on the nodes console "cloud1 login:" but it seems it is not finishing...
<tomixxx4> u know what i mean?
<tomixxx4> normally, after this line, the node goes on and at the end it prints "cloud init" or sth like that
<marcoceppi> tomixxx4: is this after a bootstrap or just a powercycle post bootstrap?
<tomixxx4> just powercycle
<marcoceppi> tomixxx4: that's normal
<tomixxx4> ok, but when i hit "juju status" i get no answer in maas-server...
<tomixxx4> it seems to stuck some way
<tomixxx4> i cant even login too
<marcoceppi> tomixxx4: so, you see cloudinit the first time because it's provisioning juju and installing a bunch of stuff, however, it's already done that so it just boots up and runs
<marcoceppi> tomixxx4: can you ssh ubuntu@cloud1.master ?
 * JoshStrobl thinks he may have been a bit overkill in the length of his response on the Juju mailing list :S
<tomixxx4> no it says "could not resolve hostname cloud1.master: Name or service not known"
 * marcoceppi goes to review the overkill
<marcoceppi> tomixxx4: so, can you run ssh -vvv ubuntu@cloud1.master and pastebin the output?
<tomixxx4> "interfaces" on master: http://pastebin.ubuntu.com/6927005
<tomixxx4> maybe i have to add "master" in search line?
<tomixxx4> ah...
<marcoceppi> JoshStrobl: fantastic feedback!
<JoshStrobl> Thanks
<marcoceppi> thank you!
<tomixxx4> it should be spelled "dns-search" and "dns-nameservers" and not "search" and "nameserver" ... xP
<JoshStrobl> marcoceppi: In case you didn't get the notice on G+, I went ahead and updated the blog post to suggest your script as well. I described mine as an atom bomb to a problem that can be solved with a screwdriver :P
<marcoceppi> JoshStrobl: cool, I'm taking a few mins to work on making installing plugins for juju a lot easier too, as well as addressing some of the documentation concerns you brought up
<marcoceppi> Unfortunately our copywriter is out, so things like quickstart which are relatively new haven't been documented yet
<JoshStrobl> marcoceppi: That's understandable. Better to know now than never I suppose! :P
<tomixxx4> marcoceppi: http://pastebin.ubuntu.com/6927050
<marcoceppi> tomixxx4: what does dig cloud1.master present?
<tomixxx4> marcoceppi: http://pastebin.ubuntu.com/6927064
<marcoceppi> tomixxx4: what does /etc/resolv.conf look like on maas-master?
<tomixxx4> marcoceppi: http://pastebin.ubuntu.com/6927078 and "interfaces": http://pastebin.ubuntu.com/6927082
<marcoceppi> tomixxx4: move 10.0.0.9 to the first line
<marcoceppi> also, which DNS server is the one you actually use?
<marcoceppi> 140.21 or 176.16 ?
<tomixxx4> dont know
<marcoceppi> whichever it is, that's the one that needs to be in /etc/bind/named.local.options
<marcoceppi> as it'll be the one that forwards requests, I'm guessing 143.205.140.21 considering the output of dig
<tomixxx4> ok its 143.205.140.21 , i did "dig www.google.com | grep SERVER"
<tomixxx4> can i simply edit resolv.conf?
<marcoceppi> tomixxx4: you can, but it'll eventually get re-generated
<tomixxx4> re-generated, but only after reboot?
<marcoceppi> tomixxx4: and other certain times
<tomixxx4> oh ok...
<marcoceppi> tomixxx4: the best way is to edit /etc/resolvconf/...
<marcoceppi> uh, I forget the file name, one second
<tomixxx4> head?
<marcoceppi> tomixxx4: yeah, /etc/resolvconf/resolv.conf.d/head
<marcoceppi> that's a good one to start with
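(Concretely, with 10.0.0.9 being the maas-master address from earlier:)

    # resolvconf regenerates /etc/resolv.conf, so persist the entry here:
    echo 'nameserver 10.0.0.9' | sudo tee -a /etc/resolvconf/resolv.conf.d/head
    sudo resolvconf -u   # regenerate now rather than rebooting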
<tomixxx4> and then reboot :-)
<tomixxx4> yep
<tomixxx4> working
<tomixxx4> can login again :-)
<tomixxx4> but no route to host :(
<marcoceppi> tomixxx4: yeah, so you'll need to update the forwarders rule in /etc/bind/named.local.options to the right dns server
<marcoceppi> restart bind, then try again
<tomixxx4> good point, but the ip is already correct
<tomixxx4> 143.205.140.21
<marcoceppi> well, something's messed up along the line there
<tomixxx4> u mean named.conf.options?
<marcoceppi> tomixxx4: for the sake of, something or another, from maas-master run `dig @8.8.8.8 google.com`
<tomixxx4> btw, i have to run the bash script again it seems, because the iptables nat table is empty ^^
<tomixxx4> i always have to run it, if i reboot the maas-server tbh
<tomixxx4> has sth to do with my old server version i guess
<tomixxx4> when i execute this bash script, do i have to reboot, restart network service or sth like that?
<marcoceppi> tomixxx4: nope, it should do all that for you
<tomixxx4> kk
<marcoceppi> tomixxx4: so, dig @8.8.8.8 google.com; did that work?
<tomixxx4> "dig @8.8.8.8 google.com" > connection timed out; no servers could be reached
<marcoceppi> darn, okay
<tomixxx4> ?
<marcoceppi> tomixxx4: well I was going to say you could just use another DNS server, but you can't because of your network location
<tomixxx4> but i can connect to i-net on my maas-server
<marcoceppi> tomixxx4: right, but some networks block exterior nameserver lookups
<marcoceppi> which is happening in your case
<tomixxx4> interesting: when i try "ping 143.205.140.21" on my node, it says "destination host unreachable"
<marcoceppi> tomixxx4: it might not be listening to pings
<tomixxx4> but "ping 10.0.0.9" works
<tomixxx4> and "ping 143.205.140.21" works also from maas-server
<marcoceppi> tomixxx4: right, because your bind instance isn't configured to not respond to pings
<marcoceppi> OH, on your node
<tomixxx4> yes, on the maas-server, i can ping 143.205.140.21 and 10.0.0.9
<tomixxx4> from the juju-bootstrap-node, i can only ping 10.0.0.9
<tomixxx4> so it seems there is some NAT problem?
<tomixxx4> in the bash-script, eth0 and eth1 are not confused or sth like that? iam not that familiar with iptables-statements
<tomixxx4> eth0 = connects the server to the nodes via switch, eth1 = connects the server with internet
<marcoceppi> tomixxx4: oh, maybe that's what's up
<marcoceppi> the script assumes something else
<tomixxx4> kk
<tomixxx4> so, i guess i need other NAT- and forwarding statements?
<marcoceppi> tomixxx4: one sec, link the script you're using again?
<tomixxx4> marcoceppi: kk, link is: http://pastebin.ubuntu.com/69227311
<tomixxx4> marcoceppi: sorry, wrong address.... correct: http://pastebin.ubuntu.com/6927311
<marcoceppi> tomixxx4: you should uncomment sysctl --system
<marcoceppi> also, you should install iptables-persistent
<marcoceppi> and write the rules to it, so they survive restarts
<marcoceppi> so, install iptables-persistent, then run the nat script, then run `iptables-save > /etc/iptables/rules.v4`
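(Those steps as commands — the NAT script path is a placeholder, and iptables-persistent is the Ubuntu package name being referred to:)

    sudo apt-get install -y iptables-persistent
    sudo /path/to/nat-script.sh                           # re-apply the NAT rules
    sudo sh -c 'iptables-save > /etc/iptables/rules.v4'   # survives reboots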
<tomixxx4> marcoceppi: but sysctl --system does not work with my ubuntu server edition
<marcoceppi> tomixxx4: what...version are you running?
<tomixxx4> ubuntu server 12.04.3 LTS
<thumper> rick_h_: if I have an environment running, with a gui installed, will 'juju quickstart' just log me in to it?
<rick_h_> thumper: yes
<thumper> ok
<rick_h_> thumper: it's a much faster way thank finding your admin secret
<rick_h_> thank/than
<thumper> true that
<marcoceppi> can we just make juju quickstart juju deploy?
<marcoceppi> err bootstrap
<rick_h_> not a fan, in this case it's not bootstrapping. It's using the existing env
<rick_h_> and it's used for: juju quickstart bundle:xxxx
<rick_h_> which can reuse an env, that's not 'bootstrap' really.
<marcoceppi> rick_h_: it will bootstrap if the env doesn't exist already
<marcoceppi> at least, I'm pretty sure it will
<tomixxx4> marcoceppi: so, the content of the bash script is ok?
<marcoceppi> tomixxx4: seems about right
<tomixxx4> marcoceppi: but dns resolving works
<tomixxx4> marcoceppi: see the output of the wget command: http://pastebin.ubuntu.com/6927432
<tomixxx4> marcoceppi: so, do u have any other idea what the reason for the problem could be ? :(
<tomixxx4> marcoceppi: ok, i have to go now. but i will be here next week, maybe there is a chance to get things work on another day ;)
<marcoceppi> tomixxx4: cheers, sorry we couldn't get this figured out today!
<tomixxx4> marcoceppi: np! thank you for all your help and hints so far ;-) All these things helped me to get stepwise closer to openstack!
<tomixxx4> gn8
<marcoceppi> jcastro: https://github.com/juju/plugins
<jcastro> I saw, I am subbed to the juju org
<jcastro> <3
<marcoceppi> I'll add more as I clean up plugins
<marcoceppi> jcastro: could you test my install instructions when you get a chance?
<marcoceppi> lazyPower: you probably wrote a few plugins too ^^
<jcastro> sure
<jcastro> marcoceppi, git is in `git` now, not `git-core`
<lazyPower> jcastro: so git is no longer a metapackage?
<jcastro> no it's a package package
<jcastro> but it's git, not git-core
<jcastro> marcoceppi, the rest of the instructions work
<marcoceppi> jcastro: cool, thanks
<marcoceppi> docs updated
<jcastro> marcoceppi, http://paste.ubuntu.com/6927700/
<jcastro> something is wrong with the backup one I think?
<marcoceppi> jcastro: backup isn't a plugin in the plugin repo
<jcastro> marcoceppi, also it's Verify not Veify
<marcoceppi> jcastro: that's something in your path
<jcastro> oh!
<lazyPower> marcoceppi: the only juju plugin i wrote you replaced with clean
<lazyPower> +1 on instructions working
<jcastro> marcoceppi, all you need is an "updating" section and you should be good
<marcoceppi> jcastro: something I just realized, it'd be nice if in () after the description juju showed you the path to the plugin
<jcastro> just cd'ing in there and git pull should do it right?
<marcoceppi> jcastro: ah, good catch
<marcoceppi> jcastro: ack
<jcastro> o/
<marcoceppi> thumper: you up?
<thumper> marcoceppi: yeah man
<thumper> pfft
<thumper> been up for hours
<marcoceppi> thumper: hey, how hard would it be to display the path of a plugin in the juju help plugins output?
<marcoceppi> like `PLUGIN      --description output    PATH`
<jcastro> man, I really want the docs on github
<thumper> you mean where we found the plugin?
<jcastro> but I don't want to go down that rabbit hole
<marcoceppi> jcastro: oh, you mean with like markdown ;)
<thumper> jcastro: do it :-)
<marcoceppi> +1 from me ;)
<jcastro> marcoceppi, yeah, then you could inline edit docs ON THE FLY
<thumper> jcastro: ask for forgiveness
<marcoceppi> thumper: yeah, where the plugin is located
<jcastro> thumper, yeah but we build the docs out of bzr, so like, we'd need to update cron jobs etc.
<marcoceppi> like /usr/bin or is it ~/.juju-plugins, etc
<thumper> marcoceppi: pretty easy
<jcastro> hmmmm
<marcoceppi> thumper: cool, I'll file a bug for it then try to hack it myself for the next 10 mins, then leave it to the experts
<jcastro> hey rick_h_, how did you import the gui history into github and so on?
<thumper> marcoceppi: heh, ok
<thumper> rick_h_: also, re: github, do you squash the commits when merging?
<jcastro> https://github.com/kfish/git-bzr
<jcastro> seems useful
<marcoceppi> jcastro: there's like a million git-bzr plugins out there
<jcastro> yeah I just want to know which one works
<marcoceppi> they all kind of suck, are you talking about just moving the docs branch to github?
<marcoceppi> I can do a one time export -> import to github
<marcoceppi> #thingstodowhennickisaway
<jcastro> yeah
<marcoceppi> was that yeah do that, or...?
<marcoceppi> I mean, we should make sure no other merges exist against the docs branch, then once we move it we'll need to basically not accept merges to the docs branch anymore on lp
<marcoceppi> then setup lp to import from gh so that we don't break auto-doc generation
<jcastro> I was thinking of filing to make the auto-doc scripts update to pull from gh instead
<jcastro> marcoceppi, I don't see any active MPs for docs.
<marcoceppi> jcastro: we could do that too, file to use gh, not sure how much IS will like that, but I can do an initial import to gh
<jcastro> -core is moving to github
<jcastro> let's go all in!
<marcoceppi> jcastro: cool, will set up in a few seconds
<rick_h_> thumper: notes, scripts, and docs in here: https://bazaar.launchpad.net/~rharding/juju-gui-lander/trunk/files
<rick_h_> thumper: hazmat says that to squash commits from bzr over to the new git repo it's a one line change to bzr-fastimport
<marcoceppi> jcastro: https://github.com/juju/docs
<jcastro> marcoceppi, nice, I'll send the mail to nick
<rick_h_> thumper: yes, we squash when merging currently. https://github.com/juju/juju-gui/blob/develop/HACKING.rst#typical-github-workflow is what we're doing atm with some warts
<lazyPower> marcoceppi: can i get an invite to the juju group?
<marcoceppi> I don't think I'm an admin?
<rick_h_> lazyPower: github username?
<lazyPower> rick_h_: chuckbutler
<marcoceppi> I do have the power
<rick_h_> lazyPower: added as charmer like marco/jcastro
<lazyPower> ty rick_h_
<rick_h_> hmm, and did owners as well. looks like everyone is an owner
<marcoceppi> lazyPower: we should make sure that we don't ever push master, but use merge proposals instead
<marcoceppi> psa
<lazyPower> marcoceppi: i'll follow the fork-repository pattern.
<marcoceppi> lazyPower: <3
<lazyPower> :)
<rick_h_> marcoceppi: lazyPower yea, we make everyone fork off of juju owned repo into their own space and use pull requests
<marcoceppi> I guess it's about time I moved amulet from my namespace
<hatch> there might be a better way to do the merging so we don't need people to rebase before merging into the develop branch....but untested
<hatch> pr's accepted.....right rick_h_ ? :)
<rick_h_> hatch: yea, hopefully I can hack at it some this weekend.
<hatch> oh that would be awesome
<rick_h_> hatch: well I missed the no rebasing bit
<hatch> there is git merge --squash
<rick_h_> some of those ways just always end up as one commit which I'm not a fan of
<hatch> and git merge --squash --ff
 * marcoceppi stops pushing master to plugins now
<rick_h_> hatch: oh hmm, well we just use the github api for the merge bit
<rick_h_> hatch: so I don't have a way to tell it how to do it like that. I guess we could have the script try to do it and then close the pull request as a follow up step
<rick_h_> I want to see how we get into these merge conflict issues. I don't run into them but seems everyone else does so I must be missing something in my workflow.
<hatch> I don't....it only happens when you merge develop into your current working branch, then modify it and another branch lands
<hatch> it's pretty rare
<hatch> but the conflicts make sense
<rick_h_> I think the trick is to fix the way we merge develop in mid-branch
<rick_h_> just git merge develop isn't good. I think a git rebase develop will work
<rick_h_> but need to setup branches in the right state and try it out this weekend
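A sketch of the mid-branch update rick_h_ is describing, assuming a feature branch named my-feature cut from develop, with the juju-owned repo as origin:

    # update a long-lived feature branch without a merge commit
    git checkout my-feature
    git fetch origin
    git rebase origin/develop    # replay the branch's commits on top of develop
    # rather than: git merge develop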
<marcoceppi> rick_h_: so is juju-core on gh or should I be using bzr still?
<rick_h_> marcoceppi: I'm not up on what core is up to. They're moving at some point but don't know timing/state of that.
<rick_h_> marcoceppi: I'm assuming since thumper asked for info they're still working on it
<marcoceppi> makes sense, ta
<thumper> marcoceppi: core moving soon
<thumper> FSVO soon
<rick_h_> hah
<marcoceppi> jcastro: are you going to announce plugins to the list or should I?
<jcastro> I can if you'd like!
<jcastro> I am at your service
<marcoceppi> jcastro: you're more...wordsmithy than I, I can't seem to get veify right ;)
<jcastro> on it
<marcoceppi> ta!
<jcastro> marcoceppi, is juju clean the only one we have right now?
<marcoceppi> it's the only one I've put in there, I'm trolling through my gists to find others
<jcastro> ack
<marcoceppi> I've created a ton of them, but they're pretty hyper specific
<jcastro> wouldn't hurt to have them in there
<jcastro> if anything as examples
<hatch> jcastro quickstart is a plugin..no?
<jcastro> hatch, yessir it is!
<hatch> or is that leveled up too much? :)
<jcastro> though tbh I think it should just be reimplemented in core
<jcastro> that should just be the way to use juju IMO
<marcoceppi> hatch: well, is juju-quickstart just a single file? IMO plugins should graduate to packages, like deployer, charm-tools, and quickstart
<marcoceppi> but a ton of them, like juju-clean, address gaps in juju released versions
<hatch> marcoceppi no it's quite a bit of work :)
<marcoceppi> or one off simplifications, like juju-setall
<marcoceppi> which just runs juju set key=val on all services whether or not it exists as a valid config
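A rough sketch of what a juju-setall along those lines could look like; the status parsing and error handling here are assumptions, not the actual plugin:

    #!/bin/sh
    # juju-setall (sketch): apply `juju set key=val` to every deployed service
    if [ "$1" = "--description" ]; then
        echo "Set a config option on all services"
        exit 0
    fi
    # service names sit two spaces deep under "services:" in 1.x yaml status
    services=$(juju status --format=yaml | awk '
        /^services:/ { in_svc = 1; next }
        /^[^ ]/      { in_svc = 0 }
        in_svc && /^  [a-zA-Z0-9-]+:$/ { gsub(/[ :]/, ""); print }')
    for svc in $services; do
        # silently skip services for which the key is not valid config
        juju set "$svc" "$@" 2>/dev/null || true
    done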
<hatch> https://code.launchpad.net/~juju-gui/juju-quickstart/trunk
<hatch> it seems like there should be a juju-plugins org and then each plugin has its own repo
<hatch> else you have to pull down everything to patch one plugin
<hatch> but I mean, unless there are quite a few it's probably not a problem :)
<marcoceppi> hatch: that was my plan, but jorge was like "start small"
<hatch> I hate to say it.....but he is probably right ;)
<marcoceppi> yeah, I registered jujuplugins.com but unless we get a cascade of plugins I think this will suffice
<hatch> yup yup
 * marcoceppi shames jcastro for cowboying on the docs branch
<dannf> he shalt be punished by eating only beans from a can and dark coffee for a week!
<thomi> Hi - I wonder who I should talk to about issues I'm having in juju-quickstart? Specifically: https://bugs.launchpad.net/juju-quickstart/+bug/1280005
<_mup_> Bug #1280005: juju-quickstart cannot bootstrap local environemtn with sudo <juju-quickstart:New> <https://launchpad.net/bugs/1280005>
<marcoceppi> thomi: as of 1.17.2 you no longer need sudo to bootstrap (it'll instead prompt for password) so when 1.18.0 is released that bug won't be relevant anymore
<thomi> marcoceppi: in the mean time, what can I do?
<marcoceppi> thomi: run sudo juju quickstart ?
 * thomi tries
<rick_h_> thomi: howdy, yea for now I cheat and bootstrap with juju and then run quickstart to get the Gui and bundles deployed
<thomi> rick_h_: ok, I'll try that
<rick_h_> thomi: we're in between releases. Right now it checks if the version is < 1.18 and if so it throws the message
<rick_h_> thomi: once 1.18 hits it'll 'just work' and it's a pain point during the dev 1.17 cycle
<thomi> rick_h_: perhaps on trusty it shouldn't install the PPA?
<thomi> rick_h_:  any idea when the 1.18 release is happening?
<rick_h_> thomi: I think the aim is within 2wk?
<thomi> awesome, thanks
<rick_h_> thomi: no promises, but soon so should get over this bump
<thomi> :)
<JoshStrobl> @marcoceppi - I don't think individual repositories are necessary for juju plugins. I've contributed to DefinitelyTyped (a repository of definitions and tests for Typescript stuff - https://github.com/borisyankov/DefinitelyTyped) and having each "definition" (in the case of Juju plugins it'd be a folder for each plugin) works out pretty well.
<marcoceppi> JoshStrobl: well, this is one repository, that has a file per plugin
<marcoceppi> same concept
<marcoceppi> JoshStrobl: I'm about to add more, I just put the one in there, but others are welcome
<thomi> rick_h_: probably a dumb question, but 'juju generate-config' does not set 'admin-secret' in the local environment config stanza, which then causes other things to fail. Any ideas what I'm missing?
<JoshStrobl> marcoceppi: Essentially, although you could easily divide each plugin into its own folder and accompany each plugin with its own Markdown page that'd be similar to man pages so people could easily understand (in a human readable fashion) the plugin's use-case, flags/args, etc.
<marcoceppi> JoshStrobl: since plugins work by path, we'd have to add /each/ folder to path, which is tedious
<JoshStrobl> true
<marcoceppi> JoshStrobl: yeah, that's why I enforce all plugins accepting a --help flag
<marcoceppi> since juju will automatically run the --help flag if you do `juju help <plugin>`
<marcoceppi> much like the other juju help topics
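The whole plugin contract being described is small; a minimal sketch, using juju-hello as a made-up name:

    #!/bin/sh
    # save as juju-hello anywhere on PATH and chmod +x it
    case "$1" in
        --description) echo "Say hello from a juju plugin"; exit 0 ;;
        --help)        echo "usage: juju hello [NAME]";     exit 0 ;;  # shown by `juju help hello`
    esac
    echo "hello, ${1:-world}"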
<rick_h_> thomi: I think they changed that it's in the jenv file now. The gui mentions it when you install it and load it now
<JoshStrobl> marcoceppi: Awesome.
<rick_h_> thomi: so check the .juju/environments/xxxx.jenv
<thomi> rick_h_: yes, I see it there, but after doing a manual bootstrap, 'juju-quickstart -e local' complains about it being missing from the environments.yaml file - should I copy it over, or is there a better way?
<rick_h_> thomi: if you juju quickstart it should find it and load the gui and auto log you in
<rick_h_> thomi: juju-quickstart --version
<thomi> juju-quickstart 1.0.0
<rick_h_> hmm, can you file a bug then that lists the error you get from quickstart with it complaining. Note that you generated the config via generate-config
<rick_h_> thomi: ^
<thomi> rick_h_: sure thing
<rick_h_> thomi: for the moment yes, you can copy it over
<marcoceppi> lazyPower: you around?
<rick_h_> thomi: but I thought it should *just work* in either location
<marcoceppi> lazyPower: could you review and merge this plz https://github.com/juju/plugins/pull/2
<thomi> rick_h_: https://bugs.launchpad.net/juju-quickstart/+bug/1280019
<_mup_> Bug #1280019: juju-quickstart does not find admin-secret <juju-quickstart:New> <https://launchpad.net/bugs/1280019>
<rick_h_> thomi: thanks, will put this on the radar to check into
<rick_h_> thomi: I'm wondering if this is another case where "juju < 1.18 look in environments.yaml" and otherwise look in .jenc
<rick_h_> .jenv
<thomi> sounds like it
<rick_h_> thomi: I'll verify tomorrow and update the bug.
#juju 2014-02-14
<lazyPower> marcoceppi: you got it
<marcoceppi> lazyPower: I cowboy'd
<lazyPower> i see this
<lazyPower> i was baking a cake.. sorry. :P
<marcoceppi> omgcake
<marcoceppi> lazyPower: I do have another plugin for your review
<lazyPower> red velvet with cream cheese icing. My inner fat boy is happy
<lazyPower> my outer fatboy is cursing me
<marcoceppi> and I want to set up some travis testing to do like basic "Are all files executable, do they respond to --help and --description" etc
<marcoceppi> I'm not allowed to bake anymore, because I keep eating all the things I bake
<lazyPower> marcoceppi: ping me when you have the open MR, and i'll pull + review
<JoshStrobl> lazyPower: Hey I made a Juju plugin and did a pull request (https://github.com/juju/plugins/pull/3), mind reviewing it and merging it upstream?
<lazyPower> JoshStrobl: ack, on it
<JoshStrobl> lazyPower: Thanks
<JoshStrobl> Anyways, going to bed now. Talk to you guys later!
<marcoceppi> lazyPower: uh, it's not executable, probably shouldn't have been merged yet
<lazyPower> marcoceppi: i was going to fix it and push a patch
<lazyPower> no bueno?
<marcoceppi> ah
<lazyPower> i'm catching my repository back up, i cloned upstream on my laptop instead of my fork
<lazyPower> so, doing the shuffle :)
 * marcoceppi makes travis.yml file for linting
<marcoceppi> np np
<lazyPower> marcoceppi: open PR, i wont merge it :P
<marcoceppi> I'm still making the travis file. Trying to figure out how to test that all files are executable, and respond to --description and --help
<lazyPower> marcoceppi: you can have the travis.yml call a bash file right?
<marcoceppi> lazyPower: I think so
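A sketch of the kind of check script a .travis.yml script: entry could call, assuming the plugins sit at the repo root and are named juju-*:

    #!/bin/sh
    # fail the build if any plugin lacks the executable bit or the
    # required --description/--help responses
    fail=0
    for plugin in juju-*; do
        [ -x "$plugin" ] || { echo "$plugin: not executable"; fail=1; }
        ./"$plugin" --description >/dev/null 2>&1 || { echo "$plugin: no --description"; fail=1; }
        ./"$plugin" --help >/dev/null 2>&1 || { echo "$plugin: no --help"; fail=1; }
    done
    exit "$fail"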
<marcoceppi> lazyPower: in the mean time
<marcoceppi> https://github.com/juju/plugins/pull/5
<marcoceppi> bah, I haven't added my prettyprint plugin yet
<lazyPower> yep
<lazyPower> i was about to say, it's giving me an invalid option for prettyprint
<marcoceppi> lazyPower: yeah, let me add pprint
<marcoceppi> but also fix it so it doesn't suck
<lazyPower> well there goes my idea for a plugin :P
<lazyPower> i was going to parse juju status to condense it down to a more compact format
<lazyPower> marcoceppi: idea for the repository - an ancillary script to enable/disable some of these plugins. As the repository grows this is going to get bloated
<marcoceppi> lazyPower: if the repo grows I'll build the plugin repository site
<marcoceppi> so you can just run `juju plugin install <plugin>`
<lazyPower> bueno
<marcoceppi> that would be a nice problem to have
 * marcoceppi already registered jujuplugins.com
<lazyPower> yeah, i recall all of that talk
<lazyPower> jorge and hatch talked you out of going full monty out the gate
<marcoceppi> yeah, I was all ready to go
<lazyPower> marcoceppi: brb, moving my quassel-core
<lazyPower_> sweet
<lazyPower_> that was painless
<lazyPower> marcoceppi: if you ever need to do that, stop the quassel-core daemon, and copy the .conf and .sqlite db (or pgsql if you're using that), don't copy the pem, it'll break your ssl connections.
<marcoceppi> lazyPower: cool
<jamespage> marcoceppi, mysql-server-5.6 is now in trusty btw
<cargill> how do you keep the charmed software up-to-date? say a new point release is out, no changes to the charm need to be made and users want to upgrade, is there a hook you use to install updates?
<cargill> if the documentation says that "Internet access will be restricted from the testing host", what does that mean? does that mean that a charm trying to get the sources of the software might fail or that the test code cannot access the outside?
<marcoceppi> jamespage: awesomeee
<jamespage> marcoceppi, it needs some polish but its functional
<marcoceppi> cargill: that means that we also test how well the charm works offline, but most testing environments have no restrictions
<plars> I seem to be hitting something that looks very much like https://bugs.launchpad.net/juju-core/+bug/1275282 which is supposed to be fixed, but I'm on 1.17.2-0ubuntu3
<_mup_> Bug #1275282: juju bootstrap fails on HP Cloud unless tools are already uploaded <juju-core:New> <https://launchpad.net/bugs/1275282>
<plars> according to the bug, it was fixed in 1.17.1 though
<marcoceppi> plars: what version of juju are you running, `juju version`?
<plars> marcoceppi: 1.17.2-trusty-amd64
<marcoceppi> plars: are you able to bootstrap? and if not could you pastebin `juju bootstrap --debug --show-log`?
<plars> marcoceppi: no, bootstrap doesn't work. I have output from it but not with those options, let me do that... give me a moment
<plars> marcoceppi: ok, with some bits scrubbed for security and some for brevity: http://paste.ubuntu.com/6931844/
<marcoceppi> plars: do you have a custom image stream for this environment?
<plars> marcoceppi: no, I don't think so
<marcoceppi> it seems like it's failing to get https://streams.canonical.com/juju/tools/releases/juju-1.17.0-precise-amd64.tgz but that address is resolvable
<marcoceppi> I also have no idea why it's picking 1.17.0 instead of 1.17.2 tools
<marcoceppi> plars: if you open your environments.yaml, do you see anything with image-metadata or tools-url for your hp cloud environment?
<plars> image-metadata-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/11289530460295/images
<plars> marcoceppi: yes
<marcoceppi> plars: ah, okay, comment that line out of your environments.yaml
<marcoceppi> plars: then `rm -f ~/.juju/environments/<name-of-hp-cloud-environment>.jenv`
<plars> marcoceppi: which I think I just got from a wiki, and I thought it was necessary to make it work
<plars> I'll try it without though
<marcoceppi> plars: then bootstrap again with --debug --show-log, etc
<marcoceppi> see if it works now
<marcoceppi> it was needed for 1.17.0, but since 1.17.1 streams are fixed
<plars> ah, ok
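Collected in one place, the recovery steps look like this (hpcloud is a placeholder environment name):

    # comment out the stale image-metadata-url line (by hand, or:)
    sed -i 's/^\( *image-metadata-url:\)/# \1/' ~/.juju/environments.yaml
    # drop the cached environment state
    rm -f ~/.juju/environments/hpcloud.jenv
    # retry with full logging
    juju bootstrap -e hpcloud --debug --show-log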
<plars> marcoceppi: I'm trying it now, but I already saw this fly by:
<plars> 2014-02-14 15:56:33 INFO juju.environs.bootstrap bootstrap.go:58 picked newest version: 1.17.0
<plars> marcoceppi: doesn't work at all without that line:
<plars> 2014-02-14 15:56:36 ERROR juju.cmd supercommand.go:294 cannot start bootstrap instance: index file has no data for cloud {region-a.geo-1 https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/} not found
<marcoceppi> sinzui: ^^
<sinzui> I have never seen that error, but I understand that HP cloud is very ill for new users and their new regions
<JoshStrobl> Wow, Ubuntu moving to systemd. This is exciting news.
<JoshStrobl> Well, at least improving support for it (according to Shuttleworth's blog). Anyways, back to work I go.
<plars> sinzui: any ideas how to get around it?
<plars> sinzui: I think when I previously tried bootstrapping and deploying things from saucy it worked, but now on trusty it does not
<sinzui> plars, both 1.17.2 and 1.16.6 are working fine in region: az-3.region-a.geo-1 (looking at recent test runs)
<plars> sinzui: I *am* running 1.17.2
<plars> 1.17.2-0ubuntu3 specifically
<sinzui> plars, I just bootstrapped on Hp from trusty with that version
<sinzui> plars, so I suspect the region is the issue, and that does relate to what I have heard about sick regions
<plars> sinzui: I just specify region-a.geo-1 in my environments.yaml, so I guess I need the az-3 there also?
<plars> nope
<plars> same error
<plars> ERROR bootstrap failed: cannot start bootstrap instance: index file has no data for cloud {region-a.geo-1 https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/} not found
<sinzui> plars, I definitely authenticated to that URL when using
<sinzui> region: az-3.region-a.geo-1
<sinzui> plars, are you a new HP Cloud user? as of this year?
<plars> sinzui: yes
<plars> sinzui: I've had it bootstrapped before when I was on saucy though
<plars> sinzui: and even had things deployed to it
<sinzui> plars, this is the config I use to bootstrap a few minutes ago: http://pastebin.ubuntu.com/
<plars> sinzui: that's just pastebin alone
<sinzui> plars, sorry,
<sinzui> plars, http://pastebin.ubuntu.com/6932350/
<plars> sinzui: ah, you have a tools-url
<sinzui> I do. that makes download of tools faster, but I don't think it matters for authentication
<sinzui> plars, for 1.17.x, the key was renamed to tools-metadata-url. I use the old name because I test on 1.16 too
<plars> sinzui: ok, but before I removed my image-metadata-url I was not getting this error
<plars> sinzui: now I put it back and I get yet a different one
<sinzui> hmm.
<plars> just out of curiosity, let me try again with 1.16.3-0ubuntu0.13.10.1 (saucy)
<plars> yes, bootstrap worked fine there (same config)
<sinzui> plars, what was image-metadata-url set to?
<plars> sinzui: https://region-a.geo-1.objects.hpcloudsvc.com/v1/11289530460295/images
<plars> sinzui: according to marcoceppi that used to be needed but not now
<plars> now even if I put it back, I get a totally different error on trusty
<sinzui> I am not sure the data there is sane....https://region-a.geo-1.objects.hpcloudsvc.com/v1/11289530460295/images/streams/v1/com.ubuntu.cloud:released:imagemetadata.json
<plars> ah, because of the region change
<plars> ok, changing my region back to just region-a.geo-1, I am getting back to the original failure
<sinzui> plars, It has less info than http://cloud-images.ubuntu.com/releases/streams/v1/com.ubuntu.cloud:released:hpcloud.json
<sinzui> I wonder if  setting image-metadata-url: https://cloud-images.ubuntu.com/releases
<sinzui> works
<sinzui> plars, ^
<sinzui> looking at the debug log during boot I can see my image-metadata-url is implicitly  http://cloud-images.ubuntu.com/releases
<plars> sinzui: and with that url for image-metadata-url, I wind up right back at:
<plars> ERROR bootstrap failed: cannot start bootstrap instance: index file has no data for cloud {region-a.geo-1 https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/} not found
<plars> sinzui: I don't understand the internals of it, but I'm guessing there's some index file it's looking for there?
<sinzui> plars, that is the webservice that does authentication for hp-cloud users
<sinzui> plars, file a bug at bugs.launchpad.net/juju-core . I need to pass this to developers, and most have reached their EoW
<sinzui> plars, they probably want you to "bootstrap --debug" to capture more details about what juju is doing
<viperZ28> any plans to make juju more PaaS aware?
<lazyPower> viperZ28: can you be a bit more specific in what you're asking?
<viperZ28> I was using local and I just started to look at this yesterday so I am a newbie, but I was wondering once JuJu is setup, can any number of developers have access to the same environment?
<marcoceppi> viperZ28: well, you can allow developers to access the same cloud deployment
<marcoceppi> viperZ28: juju is unique in that it's both paas and not paas at the same time
<viperZ28> lazyPower: let me gather my thoughts on this and come back. I am currently using Stackato (CloudFoundry) so it is a shift in thinking. I agree about it being a PaaS and not a PaaS.
<viperZ28> I was amazed how easy it was to deploy a RabbitMQ clustered environment but I later had issues as some of the nodes crashed.
<marcoceppi> viperZ28: you can use juju to deploy cloud foundry :) but also deploy its own apps. You can even use juju to deploy openstack on bare metal then deploy cloudfoundry on top of the openstack that was deployed by juju using juju
<marcoceppi> it's a very unique tool in the ecosystem of tools to use
<viperZ28> Are there plans for vsphere integration?
<viperZ28> and docker
<marcoceppi> viperZ28: we're working on docker integration
<chris38home_> Hi, when will it be possible to start playing with juju/trusty?
<lazyPower> chris38home_: You can already use juju on trusty.
<chris38home_> yep, but there were no charms for trusty in charm-gui?
<chris38home_> in juju-gui
<chris38home_> no juju-gui indeed
<sarnold> chris38home_: the charms themselves may need some modification before they work on trusty
<sarnold> chris38home_: I believe you can easily copy them all into a repo of your own, change the targeted version, aim juju to your local repository, and start testing..
<marcoceppi> chris38home_: we've got an effort to add testing to all charms so we can make sure they work on trusty before promoting them
<sarnold> marcoceppi: nice :)
<marcoceppi> You'll probably see a lot of charms land for trusty in the few weeks leading up to the trusty release
<chris38home_> nice
<hazmat> sinzui, is manual provider part of qa?
<sinzui> hazmat, no
<sinzui> maybe in the next two weeks though
<viperZ28> I have a few units that show "life: dying" since yesterday, anyway to force restart juju or the services?
<hazmat> sinzui, would be good, i'm seeing a few regressions on trunk and it's being used for prod work fwiw
<hazmat> viperZ28, version?
<viperZ28> juju version
<viperZ28> 1.16.6-precise-amd64
<viperZ28> also all my instance show "instance-state: missing", I am using local
<lazyPower> viperZ28: there has been quite a bit of effort into the 1.17 series of juju, which is in ppa:juju/devel right now. I ran into some issues with local prior to the upgrade of -dev series.
<lazyPower> ^ in regards to the local provider.
<viperZ28> thanks, I will take a look. I had heard the code was forked at some point, one python and one go, is there any truth to this?
<lazyPower> That happened a while ago. Juju is mostly Golang now. I think it was the 1.15 series but don't quote me on that. I'll have to dig to find that version where it changed.
<rick_h_> lazyPower: 1.0
<rick_h_> I believe
<lazyPower> ah, so much further behind us than I was thinking
<lazyPower> good looking out rick_h_ *hat tip*
<marcoceppi> lazyPower: yeah, last python release was 0.7, first golang was 1.0
<marcoceppi> viperZ28: I think you can run juju terminate-machine --force <machine-number> to get rid of the dying-but-not-dead-yet units
<viperZ28> marcoceppi: thanks
<viperZ28> how can I start all over? I want to switch to the dev branch
<marcoceppi> viperZ28: run sudo juju destroy-environment local
<viperZ28> thanks!
<marcoceppi> viperZ28: then sudo add-apt-repository ppa:juju/devel; sudo apt-get update; sudo apt-get install juju-core
<marcoceppi> then bootstrap and roll out
<viperZ28> https://www.irccloud.com/pastebin/SpHP3MwK
<marcoceppi> viperZ28: that means your destroy-environment didn't clean up very well
<marcoceppi> viperZ28: here are instructions on how to resolve that:
<marcoceppi> http://askubuntu.com/a/403619/41
<viperZ28> thanks again
<cjohnston> hazmat: ping
<viperZ28> where can I find the logs?
<marcoceppi> viperZ28: for local provider, they're in ~/.juju/local/log
<viperZ28> I tried to install juju-gui, which worked before
<viperZ28> 2014-02-14 21:18:33 WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "": no instances found
<viperZ28> https://www.irccloud.com/pastebin/blkSC54r
<viperZ28> https://www.irccloud.com/pastebin/nSv5sJxD
<viperZ28> seems container is out of sync
<viperZ28> marcoceppi: need to add 'sudo rm -rf /var/lib/lxc' to the cleanup process (the link you gave me earlier) https://bugs.launchpad.net/juju-core/+bug/1227145
<_mup_> Bug #1227145: Juju isn't cleaning up destroyed LXC containers <cts-cloud-review> <local> <papercut> <juju-core:Fix Released by thumper> <https://launchpad.net/bugs/1227145>
<marcoceppi> viperZ28: it's just for 1.16, it's been fixed in 1.17
<viperZ28> juju version
<viperZ28> 1.17.2-precise-amd64
<viperZ28> those were probably still there from my initial 1.16 install
<marcoceppi> viperZ28: likely
<viperZ28> destroying "/var/lib/lxc" is not a good idea either :-) Directory has to be there
<chris38home_>  looks like juju (1.17.2) add-machine lxc has a bug, using br0 instead of lxcbr0 in /etc/lxc/auto/juju-machine-0-lxc-0.conf
<marcoceppi> chris38home_: you'll want to open a bug about that asap so it gets fixed by the next stable
<chris38home_> though /etc/lxc/default.conf has the correct value (on a precise host)
<viperZ28> can someone help me understand relations?
<marcoceppi> viperZ28: sure, what doesn't make sense?
<viperZ28> I pushed a tomcat and a mysql charm, I would like to push an app to the tomcat instance and have it bound to the mysql
<viperZ28> is that possible?
<viperZ28> or do charms have to implicitly define a relation point?
<marcoceppi> viperZ28: charms have to explicitly define each relation/interface they connect to
<viperZ28> so if I can find/build a tomcat charm that defines a mysql relation then how would the credentials appear to the app?
<marcoceppi> viperZ28: so the mysql interface says that if you /require/ a relation with the mysql interface, whatever service provides the mysql interface will send those credentials
<viperZ28> this pretty much answers my question http://manage.jujucharms.com/~robert-ayres/precise/tomcat
<chris38home_> marcoceppi : https://bugs.launchpad.net/juju-core/+bug/1280461
<marcoceppi> viperZ28: so in db-relation-changed you can run `relation-get` to get the various keys
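A sketch of such a hook; the key names follow the common mysql interface (user, password, database) but treat them as assumptions and check the charm you relate to:

    #!/bin/sh
    # hooks/db-relation-changed (sketch)
    user=$(relation-get user)
    password=$(relation-get password)
    database=$(relation-get database)
    host=$(relation-get private-address)
    # the remote side may not have set its data yet; wait for the next event
    if [ -z "$password" ]; then
        exit 0
    fi
    # ... render the application's datasource config from these values ...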
<marcoceppi> chris38home_: ta, the QA team will triage and prompt you for any additional details
<viperZ28> so this is sort of the paas/not a paas. By default you don't get wiring for various services but it seems you can build those yourself to handle that.
<marcoceppi> viperZ28: well charms define what they can talk to
<marcoceppi> it's more like legos
<marcoceppi> so there are a few charms that provide the MySQL interface, there's a MySQL charm, a percona charm, etc
<viperZ28> how can I inspect a charm?
<marcoceppi> viperZ28: you can view it in the charm browser, or download it with charm tools
<marcoceppi> http://manage.jujucharms.com/~robert-ayres/precise/tomcat
<viperZ28> is it possible to view service settings via command line?
<marcoceppi> viperZ28: yes, juju get <service>
<viperZ28> marcoceppi: sweet!
<viperZ28> thanks
<viperZ28> it would be nice if it handled NATing
 * chris38home_ seeing that he can do juju add-machine kvm:0, is this currently working ?
<marcoceppi> chris38home_: eh, it probably works? It's a new feature
<marcoceppi> chris38home_: you can also do juju deploy --to kvm:0 (or lxc:0)
#juju 2014-02-15
<chris38home_> marcoceppi, doesn't seem to work on precise
<marcoceppi> chris38home_: It might not be available yet then
<chris38home_> is there a way in a bundle to say --to lxc:0 ?
<hazmat> cjohnston, pong
<imhotep> hello, how do I update code after I deploy with juju ?
<JoshStrobl> imhotep: Are you referring to Charms that you have created and wish to update, or charms you've deployed with Juju that aren't yours?
<marcoceppi> imhotep: which code, the charm or the software?
<JoshStrobl> marcoceppi: Lemme tell you, setting up vagrant and grabbing the Vagrant box for Ubuntu Server w/ Juju is sooooo much easier than manually setting up my own VM :D
<marcoceppi> JoshStrobl: yeah, that's why we pursued that in the first place, we really should promote it more
<JoshStrobl> marcoceppi: That said, out of curiosity, any reason why LXC is being promoted without using some nice abstraction layer like Docker (since with Docker you can have a Dockerfile that sets everything up as well)
<marcoceppi> JoshStrobl: well, because juju talks to LXC directly, there's no need to use an abstraction layer like docker
<marcoceppi> juju /is/ docker in this case, only docker + much more
<marcoceppi> and the charm is the dockerfile
<imhotep> JoshStrobl, marcoceppi: software. I believe the charm will stay frozen since it works (I know there is an upgrade-charm)
<marcoceppi> in very /very/ loose terms
<hazmat> imhotep, upgrade-charm
<imhotep> basically I want to trigger config-changed to force an update
<hazmat> imhotep, or for software installed by charm, it should ideally have a config option
<marcoceppi> imhotep: it depends on the charm, but some expose a version configuration option which allows you to move between versions of the deployed software
<JoshStrobl> marcoceppi: I suppose but until LXC is really the more promoted method, things are going to be virtualized using Vagrant + VirtualBox, which isn't as "bare-metal" as LXC
<hazmat> JoshStrobl, because juju and its lxc support pre-date docker by quite a while
<imhotep> ok it sounds like the one I am using doesn't have that ( I am using the nodejs one)
<marcoceppi> JoshStrobl: the vagrant box simply starts an Ubuntu machine with Juju, LXC, and the local provider enabled
<imhotep> it has a crontab option that updates code on a frequent basis (if user sets it)
<JoshStrobl> marco: I see.
<marcoceppi> imhotep: ah, yeah, that doesn't really have a version config, you'll need to use the crontab config
<marcoceppi> JoshStrobl: it's the simplest solution to "how do I support LXC on Mac and Windows"
<imhotep> which means I can't really control it, right? crontab would fetch latest code every n seconds/minutes/...
<marcoceppi> imhotep: yes, there is work being done on the nodejs charm to make it more responsive to users
<imhotep> ah cool
<JoshStrobl> marcoceppi: Of course, since Vagrant is multi-platform, unlike Docker (LXC) which is just now getting Mac support. Not that I have anything against Vagrant, seems to accomplish the same as Docker with some quirks (and Vagrantfile syntax is more complex).
<marcoceppi> Just like juju + LXC, Vagrant also pre-dates Docker ;)
<JoshStrobl> But yea, I was just curious. Figured it never hurts to ask questions, learn along the way and maybe use that knowledge in the future.
<marcoceppi> but yeah, we only use the Vagrant box to spin up an Ubuntu machine so you can use juju + LXC
<hazmat> JoshStrobl, where do you see docker getting mac support?
<JoshStrobl> uno momento
<JoshStrobl> http://docs.docker.io/en/latest/installation/mac/
<JoshStrobl> It's really recent
<JoshStrobl> http://blog.docker.io/2014/02/docker-0-8-quality-new-builder-features-btrfs-storage-osx-support/
<hazmat> JoshStrobl, that's just downloading a linux vm to run lxc/docker in
<JoshStrobl> hazmat: Yes and Vagrant utilizes VirtualBox as well to run lxc, not much different.
<hazmat> JoshStrobl, yes.. they're both using vagrant.. i thought you meant some form of native support
<hazmat> docker notionally has mentioned they'd like to support other containers than lxc
<hazmat> er.. s/vagrant/virtualbox
<JoshStrobl> hazmat: Native support in Mac OS X for Linux tech? Haha, don't we all wish? :P
<hazmat> JoshStrobl, well.. solaris has zones, freebsd has jails.. osx has some support for chroots
<hazmat> and is based on freebsd userspace.. ie tools like jailkit
<hazmat> not quite the same though
<marcoceppi> hazmat: seems like they'd just have the "providers" as plugins like Vagrant, so you can say run against X container and just abstract an API for provider plugin
<cjohnston> hazmat: hey! question for ya.. IIRC you had recommended to us that for our automated testing we don't destroy the bootstrap node for speed.. we have been doing that, however after making a change to one of our charms we started to get errors with our testing.. it appeared as though we were using a cached charm for deployment or something.. if we killed the bootstrap node and then redeployed the tests ran with no
<cjohnston> errors. is this expected?
<hazmat> cjohnston, hmm..
<hazmat> cjohnston, so you're modifying the charm without incrementing the version?
<cjohnston> hazmat: possibly. I don't know if the version changed or not
<hazmat> cjohnston, yeah.. juju will internally cache the charm with the given revision
<hazmat> cjohnston, i filed a bug for this in the api fwiw https://bugs.launchpad.net/juju-core/+bug/1194880
<cjohnston> hazmat: so if we changed the charm version number it should have worked
<hazmat> cjohnston, yeah.. you can also deploy with -u which will do it for you
<cjohnston> cool. thanks
<marcoceppi> hazmat: I thought -u only worked on local?
<hazmat> marcoceppi, that's true, but how else would the charm change without a revision increment
<marcoceppi> hazmat: juju deploy --repository . local:<charm> and not bump the revision file?
<hazmat> marcoceppi, that's what -u is for
<marcoceppi> OH local charm, I thought it only worked on local provider
<marcoceppi> I got my locals mixed up
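In practice that means either bumping the revision file yourself or letting -u do it; a sketch, with paths purely illustrative:

    # assume a local charm checkout at ./precise/mycharm with a "revision" file
    echo $(( $(cat precise/mycharm/revision) + 1 )) > precise/mycharm/revision
    juju deploy --repository=. local:precise/mycharm
    # or let juju bump the revision for you:
    juju deploy -u --repository=. local:precise/mycharm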
<hazmat> marcoceppi, no worries.. so digital ocean plugin is looking good.. if you're up for testing.. one issue that's preventing release is that the precise images there need some massaging before use.
<marcoceppi> hazmat: I'm always down for testing
<hazmat> marcoceppi, oh.. it also needs a one-line patch against core to fix a bug in manual provider (https://bugs.launchpad.net/juju-core/+bug/1280432) .. latest is pushed to https://github.com/kapilt/juju-digitalocean
<marcoceppi> hazmat: I'll go ahead and compile juju then
<hazmat> marcoceppi, cool, thanks.. i'll try to plug away at the precise image issues. basically they need an apt-get update/upgrade cycle before juju talks to the machine.
<hazmat> i've been keeping away from the juju api client so far, but it would be the easiest way to resolve this.
<marcoceppi> hazmat: well, shouldn't the manual provider do that?
<hazmat> marcoceppi, no.. it tries not to perturb machines without cause.
<marcoceppi> ah
<hazmat> marcoceppi, the api for manual provisioning has some knobs around package install (which i use for my fast lxc provider) but this one needs to happen a bit earlier in the install cycle.
<hazmat> mostly the providers rely on cloudinit behavior here
<lazyPower> hazmat: is the plugin currently available in pip?
<hazmat> lazyPower, not yet.. just registered it :-)
<lazyPower> I didn't think so :) just confirming. Following along at home through the readme
<hazmat> lazyPower, it works well enough for 13.04/13.10 atm, but till it's solid on precise there's not much point in pushing it
<hazmat> lazyPower, still in pre-release atm
<lazyPower> right. I'm a saucy user
<lazyPower> so i'll clone it and go from there
<hazmat> lazyPower, well it's more about the instances than the client version
<hazmat> lazyPower, there aren't many charms for non-precise in the official charms.
<lazyPower> oh, you mean their images. Ok
<lazyPower> right
<lazyPower> well its easy enough to pull a precise image, change the series and do a local charm deployment
<hazmat> lazyPower, for a power user like yourself :-) sure... but the goal is end users.
<lazyPower> that's quite possibly the nicest thing anyone has ever said to me
<lazyPower> marcoceppi: buy this guy a beer for me.
<hazmat> marcoceppi, they have this awesome beer on tap @ lost dogs.. abraxis :-)
<marcoceppi> hazmat: I guess we'll have to go drink up again soon
<lazyPower> We should do a drinkup soonish
<hazmat> marcoceppi, lazyPower if you have DO support accounts, i'd appreciate an upvote on http://digitalocean.uservoice.com/forums/136585-digitalocean/suggestions/5524786-update-ubuntu-12-04-3-images-to-12-0-4-4
<marcoceppi> you have my votes
<lazyPower> hazmat: all 3 of mine just joined the choir
<hazmat> marcoceppi, lazyPower precise issue fixed in client provider.. and behold.. wordpress charm running on DO http://192.241.155.235/
<lazyPower> nice!
<marcoceppi> huzzah
<hazmat> pushed latest, and taking a break
<marcoceppi> lazyPower: you going to be around in about 10 mins? I think I found the issue with mysql, need someone to test
<lazyPower> yep
<hazmat> marcoceppi, lazyPower confirmed btw that the do provider should work with the dev ppa version of juju 1.17.2 .. but needs --upload-tools on juju docean bootstrap
<lazyPower> hazmat: i'll be on my desktop rig in about 20 minutes and i'll try my hand at bootstrapping
<marcoceppi> hazmat: sweet, I'll give it a go after getting the mysql charm patch up
<lazyPower> marcoceppi: you pushing those changes to GH? LP is going offline in 10 minutes.
<marcoceppi> lazyPower: already pushed
<lazyPower> branching now
<marcoceppi> lazyPower: https://code.launchpad.net/~marcoceppi/charms/precise/mysql/actually-start-mysql/+merge/206597
<marcoceppi> hazmat: I get a missing api creds error even though i have a ~/.juju/docean.conf
<hazmat> marcoceppi, updated the readme.. only env vars at the moment
<marcoceppi> oh, hah, cool
<hazmat> also dropped a release on pypi so the readme should work..
<marcoceppi> hazmat: huh, still not working
<hazmat> marcoceppi, pastebin?
<marcoceppi> hazmat: sure, what would you like?
<hazmat> marcoceppi, just command and output
<marcoceppi> hazmat: http://paste.ubuntu.com/6939820/
<hazmat> marcoceppi, hmm.. and values look sane if you do $ env | grep DO_
<hazmat> the DO_CLIENT_ID should be the short string
<marcoceppi> hazmat: apparently just sourcing the conf wasn't enough, I had to export each line
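The gotcha here: sourcing a KEY=value file sets shell variables without exporting them to child processes. One way around it, assuming the conf holds the DO_* variables the readme asks for:

    set -a                      # auto-export every variable assigned below
    . ~/.juju/docean.conf
    set +a
    env | grep '^DO_'           # sanity-check what the plugin will see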
<marcoceppi> now I get this http://paste.ubuntu.com/6939829/
<marcoceppi> did a git pull and install for latest
<hazmat> marcoceppi, hmm.. argh.. that's probably a delta from the old dop client and the current api it has in github trunk.
<hazmat> marcoceppi, i'm going to resolve it, i think, by just having a separate do client internal to the plugin, but for now.. resolution is grab a copy of https://github.com/ahmontero/dop and run python setup.py develop in it
<hazmat> or python setup.py install
<hazmat> or maybe i should switch out to the other python digital ocean client lib
<hazmat> fwiw here's some relative timing and usage for bootstrap + add-machine http://pastebin.ubuntu.com/6939864/
<lazyPower> marcoceppi: success on the mysql charm in HP, checking local and amazon now
<marcoceppi> hazmat: bootstrap still fails even with latest dop
<marcoceppi> using what's in the git repo
<hazmat> marcoceppi, same error?
<marcoceppi> hazmat: yeah
 * marcoceppi digs a bit
<hazmat> marcoceppi, you may have to kill the other dop that got installed to /usr/local/lib/python2.7/dist-packages/
<hazmat> marcoceppi, surprised you're not using virtualenv..
<marcoceppi> hazmat: I still haven't gotten used to virtual env
<marcoceppi> I just setup.py install stuff, then blow away /usr/local/... when I get tired of it
<marcoceppi> though I might try venv next
<hazmat> marcoceppi, aha.. sorry i see the issue
<hazmat> marcoceppi, unset DO_SSH_KEY
<marcoceppi> hazmat: ah, I still have to switch to null provider?
<marcoceppi> errrr manual provider
<hazmat> marcoceppi, yes.. whatever env you're pointing to must be a null/manual provider
<marcoceppi> hazmat: so what should the environments.yaml stanza look like?
<hazmat> marcoceppi, see readme
<marcoceppi> nvm, found the readme
<marcoceppi> okay, moving now
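For reference, the stanza is a plain manual/null-provider environment, roughly as follows (field values are assumptions; the plugin's readme is authoritative):

    environments:
      digitalocean:
        type: manual
        bootstrap-host: null
        bootstrap-user: root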
<hazmat> marcoceppi, cool.. also -v gives quite a bit more output than the default.. which is terse
<marcoceppi> hazmat: good to know
<marcoceppi> hazmat: if bootstrap fails, it should capture that failure and destroy the instance it spun up
<hazmat> marcoceppi, noted. also if bootstrap fails. you have to manually kill a directory.. boot-$envname in $JUJU_HOME
 * hazmat files bugs
<marcoceppi> hazmat: yeah, just found that :)
<marcoceppi> hazmat: you tracking bugs on gh? I can start putting those there as I find them
<hazmat> marcoceppi, yup
<hazmat> marcoceppi, https://github.com/kapilt/juju-digitalocean/issues/
<lazyPower> marcoceppi: as soon as launchpad comes back i'll +1 approve the MP. Works on all the providers I have access to.
<lazyPower> great work
<stokachu> has there been any interest in perl bindings for juju?
<marcoceppi> lazyPower: are you verifying that /usr/bin/mysqld is running on each stand up?
<lazyPower> i am
<lazyPower> marcoceppi: standup, validate service is active, add relational charm mediawiki and wordpress
<hazmat> stokachu, not really.. but if it scratches an itch i'd say go for it.. although.. the ideal client technically would be callback or poe based for perl.
<lazyPower> all of the above completed without issue
<marcoceppi> lazyPower: oh, nice
<marcoceppi> \o/
<stokachu> hazmat: ive got a non-blocking library i started
<hazmat> stokachu, not idiomatic though.. the websocket is async bidi..
<hazmat> stokachu, cool
<marcoceppi> I mean, it's not really a fix more than it is a huge bandaid until the re-write
<stokachu> hazmat: https://github.com/battlemidget/perl-juju
<stokachu> im basically using your api calls as starting point
<lazyPower> stokachu: epic github handle
<stokachu> but its async
<stokachu> lazyPower: haha thank you :)
<lazyPower> marcoceppi: I feel like I should go back and add percona tests to some of the charms i've written amulet tests for after our standup on Friday
<stokachu> hazmat: it uses anyevent which can interface with poe, ev etc
<hazmat> stokachu, that still looks a bit fishy re the async bits
<marcoceppi> lazyPower: really, the mysql charm needs to make sure percona supplies the exact things the interface for mysql implements
<hazmat> stokachu, you're doing sync on top of an async loop expecting sync usage
<marcoceppi> lazyPower: the burden is on the MySQL charm, not on other charms
<hazmat> stokachu, afaics.. imagine doing 5 calls in a row, and getting responses back in different order
<stokachu> hazmat: thats what the $done = AnyEvent->condvar handles
<stokachu> https://github.com/battlemidget/perl-juju/blob/master/lib/Juju/RPC.pm#L83-L94
<hazmat> stokachu, yeah.. looking at it.. my perl .. is rusty :-) i generally pull out the camel book
<stokachu> but it's still in early stages so i could be missing something
<stokachu> so basically condvar handles each session separately
<stokachu> so even though the calls may be out of order the data responds properly
<stokachu> so far anyway
<hazmat> stokachu, cool
<stokachu> ive still got a lot to learn on event programming apparently
<stokachu> hazmat: i also understand hybi-13 now
<hazmat> stokachu, its why goroutines win
<stokachu> hazmat: yea go does it right
<stokachu> so much easier
<hazmat> python gevent does okay.. but baked into the language makes things much nicer
<stokachu> agreed
<hazmat> s/gevent/greenlet/
<stokachu> i like python tornado
<stokachu> at least for understanding python events
<hazmat> stokachu, also nice, but it's callback with syntax sugar around yield as coroutine.. py3.4 asyncio is much the same.. tornado is nice though
<hazmat> gui uses it to proxy/intercept/augment the juju api
<stokachu> ah nice
<stokachu> hazmat: do you think i'll get lashes if i blog about the perl bindings at some point
<stokachu> maybe when it reaches 1.0
<hazmat> stokachu, lashes.. no way. code/results == good.. open source.. early and often :-)
<stokachu> hazmat: hah ok cool
<hazmat> stokachu, to quote gkh's response back to me on debating/patching kdbus.. "That's why the Linux kernel mailing list very rarely has "idea" discussions, real patches are what matters."
<hazmat> s/patches/source
<hazmat> for the same effect
<stokachu> sweet
<stokachu> i like that philosophy
<hazmat> stokachu, so hyb13 vs rfc?... delta == ?
<stokachu> hazmat: so i think hyb13 == rfc6455
<stokachu> i was just confused on those versionings
<hazmat> stokachu, yeah.. that's basically what i got out of it.. delta being grammatical/syntax on the way to publication afaics.
<stokachu> the websocket library from go.net is rfc6455/hybi13 compliant
<stokachu> yea i read the diff between them
<stokachu> and it was just grammatical stuff
<hazmat> marcoceppi, any progress?
<stokachu> what i don't get is how to check what version of the libraries are used within juju
<stokachu> it just points to code.google.com/p/go.net/websocket for example
<hazmat> stokachu, godeps are a state of sadness.. juju-core implements its own dep rev mechanism
<hazmat> it's in dependencies.tsv in the root dir of a juju-core checkout
<stokachu> ah ok that helps
<lazyPower> marcoceppi: http://blog.dasroot.net/juju-plugins-ahoy/
<hazmat> lazyPower, so.. how do plugins install in that model.. if they have deps?
<lazyPower> hazmat: we haven't gotten that far with the plan
<lazyPower> the idea originally was to build an API service that works similar to npm that fetches the plugin, and the plugin registers the dependency chain
<lazyPower> but with only 5 plugins to start, we haven't exactly got a need for something that complex
<hazmat> lazyPower, there are dozens of plugins..
<lazyPower> not in the juju/plugins repository
<hazmat> lazyPower, then it's not really the plugins repository, is it
<lazyPower> depends on your definition of a repository
<lazyPower> One of the ideas I heard was to change it to an organization, and give each plugin their own repository, that way you could warehouse all the dependencies together. But the juju-plugin-plugin aims to solve some of those concerns.
<hazmat> well a git repo by itself is just a name grab.. the need for more exists, the question is discovery.. minus the npm (cross-language arbitrary install) .. the easiest thing is to just document location via docs page on plugins imo.
<lazyPower> that's cumbersome to update, and to invalidate old plugins that are no longer useful as features get expanded in core
<lazyPower> i disagree with it being "the easiest"
<lazyPower> in the first run, yes. that would be insanely easy. When you have 50+ plugins - nope.
<hazmat> lazyPower, the alternative you proposed was documenting them in that plugin repo.. which amounts to the same issue.
<lazyPower> true, but this is step 1
<marcoceppi> hazmat: if your plugin has deps, package it up like quickstart
<hazmat> marcoceppi, again discovery is the question
<marcoceppi> hazmat: I foresee having plugins in their own github repos soon, and a central registry which will likely have a way to know "how to install plugin"
<marcoceppi> hazmat: yay, I deployed wordpress too
<stokachu> is there a system path that searches for plugins or just ~/.juju-plugins?
<hazmat> marcoceppi, YIPEE :-)
<hazmat> stokachu, PATH for juju- prefix commands
<marcoceppi> stokachu: plugins work as long as they're in PATH
<hazmat> stokachu, juju help plugins
<marcoceppi> this just adds ~/.juju-plugins to path
<stokachu> ah ok reading now
<hazmat> stokachu, only real req on them is that --description returns a one-liner.. the rest is per the executable logic.
<stokachu> gotcha
<marcoceppi> stokachu: and have a --help flag
<hazmat> juj
<hazmat> oops
<stokachu> marcoceppi: ok
<hazmat> marcoceppi, really that's a  best practice/suggestion.. not a req afaics
<marcoceppi> hazmat: juju help <plugin> invokes it
<hazmat> marcoceppi, cool, something new everyday :-)
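So a by-hand plugin install is just a copy onto PATH (juju-foo being a made-up name):

    mkdir -p ~/.juju-plugins
    export PATH="$PATH:$HOME/.juju-plugins"    # e.g. from ~/.bashrc
    install -m 0755 juju-foo ~/.juju-plugins/
    juju foo --help                            # also reachable as `juju help foo`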
<hazmat> marcoceppi, re destroy error, you can rerun
<hazmat> marcoceppi, i need to introduce a sleep or auto-retry there
<hazmat> marcoceppi, did you ever find that biz card?
<marcoceppi> hazmat: yeah, it's the one you've got
<hazmat> ack.. bummer.. oh well hopefully more response with working code or via twitter
<marcoceppi> hazmat: yeah, could berate over social media
<marcoceppi> lazyPower: thanks, merged and should be in the store soon
<lazyPower> awesome, now that i know LP is back online, just commited my approval review
<lazyPower> Oh.. it stored it when they were in maintenance mode. that's nice to know it queues
#juju 2014-02-16
<chris38home_> hi, will juju "upgrade-juju" upgrade all running jujud in my environment?
<marcoceppi> chris38home_: yes with some caveats
<chris38home_> it's quite unclear where the jujud comes from, as there is a repository version and a local version, and what does --upload-tools do?
<hazmat> chris38home_, yeah.. i filed a bug for dry-run mode just to figure out version selection there.
<hazmat> chris38home_, so --upload-tools is the most deterministic form wrt to versions atm, else it goes through a tools lookup (local env storage, provider env storage, and global simple streams data)
<hazmat> chris38home_, --upload-tools amounts to use my client side binary of jujud as the new version
<hazmat> chris38home_, fwiw the bug on dry-run for knowing version is http://pad.lv/1272544
<marcoceppi> chris38home_: there's a URL online which contains a tar.gz of all the releases of jujud for all architectures, that's where it comes from. It doesn't exist in any debian package, etc
<marcoceppi> well, it does, but that's not how it's installed on the servers
<hazmat> anyone know where the juju logos are at ?
<marcoceppi> hazmat: I found a few a long time ago somewhere
<hazmat> marcoceppi, yeah.. me too.. just can't find them anymore..
<marcoceppi> hazmat: http://design.ubuntu.com/downloads?search=juju&submit=
<hazmat> marcoceppi, awesome, thanks
#juju 2015-02-09
<SimplySeth> how does one define a custom interface in metadata.yaml ? I see interfaces like mysql and http, but don't know where these are defined.
<lazyPower> SimplySeth: interfaces are kind of arbitrary - think of them as loosely coupled contracts
<lazyPower> i gave a talk over this, let me find the slides
<lazyPower> https://speakerdeck.com/chuckbutler/service-orchestration-with-juju?slide=24
<lazyPower> SimplySeth: an important thing to remember about your interfaces as you start to design your relationship models - it's a bi-directional communication cycle between the two services, you can have any number of different relations that consume the same interface but behave differently, and if you send for example - 3 data points on one of the relationships over that interface, every time the interface is used the same 3 data points should be
<lazyPower> exchanged.
<lazyPower> if you're changing the data handoff, the interface should change.
<lazyPower> ergo: you'll see interfaces like db, and db-admin
<SimplySeth> * goes to read
<SimplySeth> mesos communicates on 5051 and 5050 and 2181
<lazyPower> that very well could be modeled as 3 independent relationships
<lazyPower> or, if all of those have the same concern, you can encapsulate that as a single relationship - i'm not really familiar with how mesos talks to its minions
<lazyPower> what i typically do when i'm speccing out a charm - is i break down the concerns, get a mental data model on paper and start to sketch how that looks between services - pencil and paper are good tools for this - or a whiteboard if you have one
<lazyPower> once you've got a decomposed service diagram, you then name them, and define your data exchange between the services, and have a good representation to stuff in your metadata.yaml
<marcoceppi> bodie_: re: iptables, you could probably achieve a similar effect with an iptables subordinate
<Muntaner> good morning to everyone!
<Muntaner> having problems with bootstrap... can anyone help me?
<Muntaner> I'm trying to use juju with an openstack in a private cloud
<Muntaner> when I try to bootstrap
<Muntaner> Juju successfully creates the VM on the openstack
<Muntaner> seems to be able to communicate with this instance
<Muntaner> but... at a certain point, bootstrap fails
<Muntaner> with this log:
<Muntaner> http://paste.ubuntu.com/10139228/
<Muntaner> hi dimitern
<dimitern> hi Muntaner
<Muntaner> I'm having some bootstrap problems... may you help me?
<dimitern> Muntaner, can you remind me - were you having issues behind a proxy?
<Muntaner> no dimitern no proxies issues :)
<Muntaner> well, simply I can't bootstrap juju over a private OpenStack cloud
<Muntaner> logs: http://paste.ubuntu.com/10139228/
<dimitern> ah ok
<dimitern> Muntaner, it seems you can't fetch the tools
<Muntaner> dimitern: need environments.yaml?
<dimitern> Muntaner, nope, let me check something first
<dimitern> Muntaner, have you tried running juju sync-tools before bootstrap?
<Muntaner> I'll try now
<dimitern> Muntaner, wait a sec
<Muntaner> ok dimitern, tell me what to do :)
<jam> hey dimitern, aren't you off today?
<dimitern> Muntaner, try following this guide https://juju.ubuntu.com/docs/howto-privatecloud.html
<dimitern> jam, hey, yes - officially, just checking how things are going
<Muntaner> dimitern: maybe I need to create this metadata on the server?
<Muntaner> 'cos actually, I'm trying to use juju with my laptop on the openstack server via LAN (10.0.0.0/24)
<dimitern> Muntaner, yes, it seems bootstrapping finds the correct image to start, but the juju tools metadata is missing
<Muntaner> aw ok dimitern, I can't get the workaround btw :(
<dimitern> Muntaner, so try juju metadata generate-tools -d $HOME/juju-tools (create that dir as well beforehand); then juju bootstrap --metadata-source $HOME/juju-tools
<Muntaner> dimitern: all of this in my laptop, right?
<Muntaner> dimitern: having some strange debug messages in this command, want to check this?
<Muntaner> mike@mike-PC:~$ juju metadata generate-tools -d /home/mike/juju-tools --debug
<Muntaner> -> http://paste.ubuntu.com/10139486/
<Muntaner> but, in the folder, it created some jsons
<Muntaner> com.ubuntu.juju:released:tools.json, index.json, index2.json
<dimitern> Muntaner, that's fine, try bootstrapping now with --metadata-source
<Muntaner> dimitern: I get an earlier error :(
<Muntaner> http://paste.ubuntu.com/10139519/
<Muntaner> maybe... I need to upload tools?
<dimitern> Muntaner, no, just --metadata-source
<dimitern> Muntaner, ha!, ok
<Muntaner> dimitern: what should I try now?
<dimitern> Muntaner, you'll need juju metadata generate-image -d <same dir> -i <image id from your openstack glance image list>
<Muntaner> ok, gonna do that and give feedback
<dimitern> Muntaner, you can run this multiple times - one per image (e.g. trusty amd64, i386, etc.); then run juju metadata validate-images (check that page I sent and the commands --help)
<dimitern> Muntaner, the idea is to have a bunch of images and tools metadata in that metadata dir so that at bootstrap juju can find your images and tools
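Put together, the private-cloud flow being walked through is roughly the following; <image-id> is a placeholder and the exact flags are per each command's --help:

    mkdir -p ~/juju-tools
    juju metadata generate-tools -d ~/juju-tools
    juju metadata generate-image -d ~/juju-tools -i <image-id> -s trusty
    juju metadata validate-images
    juju metadata validate-tools
    juju bootstrap --metadata-source ~/juju-tools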
<Muntaner> dimitern: all of this should happen on my laptop, right?
<Muntaner> dimitern: got a new error :)
<Muntaner> http://paste.ubuntu.com/10139639/
<dimitern> Muntaner, did you try validate-images and validate-tools successfully before trying to bootstrap? please do that
<Muntaner> dimitern: I did validate images - not validate tools, will try it now
<dimitern> Muntaner, yeah, if both are successful you've a better chance of bootstrapping ok
<Muntaner> dimitern: same errors
<dimitern> Muntaner, with validate-tools?
<Muntaner> seems like juju can't find /home/mike/juju-tools/tools/releases/juju-1.21.1-trusty-amd64.tgz
<Muntaner> in fact, I don't have that file
<dimitern> Muntaner, hmm ok, then let me ask someone what we're missing here
<Muntaner> yep - with both validate-tools and validate-image
<Muntaner> ok dimitern, thanks
<Mmike> Hi, guys and gals.
<Mmike> How do I check which charm version has deployed my services/units? Is that info displayed in 'juju status' ?
<jamespage> Mmike, it should be yes
<Mmike> jamespage: found this: http://juju-docs.readthedocs.org/en/latest/internals/charm-store.html#charm-revisions
<Mmike> but not sure how to link charm revisions with bzr revisions
<jamespage> Mmike, you can't
<jamespage> Mmike, charmstore revisions are created when the bzr source branch is automatically imported
<jamespage> cs:trusty/swift-storage-92 for example
<jamespage> but there is no direct correlation with bzr versions
<jamespage> it's an intentional decoupling to avoid tying charm authors to a single vcs
<hazmat> those docs should get taken down, they are the ancient pyjuju impl docs
<Mmike> hazmat: heh... but they reveal some of the internals
<Mmike> which is a good thing
<hazmat> Mmike: we have internal docs for the current version in the source tree as well
<hazmat> covering different topics.. but also useful for lighting up internals.. https://github.com/juju/juju/tree/master/doc
<hazmat> the old docs are hit or miss, especially the internals which may no longer apply
<hazmat> actually almost none of those docs on internals apply anymore
<hazmat> from juju-docs.readthedocs.org site
<Mmike> hazmat: how can I get the internal docs, are they on launchpad somewhere?
<Mmike> actually what I need is to test the upgrade path for some charm - so I see the bzr commit that was merged into 'trunk' (main?) and I'd like to deploy that particular 'commit' - how do I figure out what charm revision that was?
<marcoceppi> Mmike: there's no correlation to bzr commit and charm revision. You could probably figure that out with some API queries but it's not apparent
<marcoceppi> hazmat: I agree, any idea on who setup that site?
<Mmike> marcoceppi: ack, thnx. I'll try to figure it out manually.
<hazmat> marcoceppi: not sure, maybe brendan?
<hazmat> er.. i mean brandon
<hazmat> Mmike: that github link above has the current internal docs
<Mmike> hazmat: missed that one, thnx! :D
<hazmat> Mmike: in a nutshell, you can query the bzr version of a particular charm store revision, but there's no mapping from bzr rev to charm store rev. effectively the charm store rev is a monotonically increasing integer managed by the store, in response to pushes/puts of a charm.
<hazmat> ie. something like https://api.jujucharms.com/v4/mongodb/meta/extra-info
<Mmike> i see
<Mmike> thnx, lads
<hazmat> Mmike: actually this is a bit better https://api.jujucharms.com/v4/mongodb/meta/revision-info
<hazmat> rick_h_: are there public docs on the store api?
<hazmat> Mmike: fwiw, docs on the store api https://github.com/juju/charmstore/blob/v4/docs/API.md
<urulama> hazmat: rick_h_ is out today, swap days. But yes, those are the store API docs.
<hazmat> urulama: danke
<urulama> Mmike: there is a /meta/extra-info endpoint in CS that holds BZR digest information for a given revision. You can always check the correlation between charm revision and BZR revision.
<urulama> Mmike: if you need a hand, let me know which charm you're interested in
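For reference, the two endpoints mentioned above can be queried with plain curl; the charm name here is just whichever one you are interested in:

    # revision history for a charm (endpoint from the chat)
    curl https://api.jujucharms.com/v4/percona-cluster/meta/revision-info
    # extra-info, which carries the bzr digest urulama mentions
    curl https://api.jujucharms.com/v4/percona-cluster/meta/extra-info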
<Mmike> urulama: it's percona-cluster... current revision is -15, and I'd like to know what charm revision was at branch revno 40
<urulama> Mmike: ok, revision 15 has bzr revision 44, looks like percona-cluster-12 had bzr revision 40, but there is no such revision in the system.
<urulama> Mmike: have to check a bit what happened to all revisions < 13
<marcoceppi> Mmike: you can just cook up a new revision if you want to deploy rev 12 (bzr 40) of the charm
<marcoceppi> branch it, reset to revision 40, deploy charm
<Mmike> urulama: that's fine, thnx, don't bother. I'll try both 12 and 13 (that is, upgrade 12->15 and 13->15)
<Mmike> marcoceppi: or that, indeed! thnx!
<Mmike> I did all those tests before 15 got public, I just want to double-check now that it's merged.
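A minimal sketch of marcoceppi's branch-and-reset suggestion, assuming the charm lives at the usual lp:charms location and a standard local charm repository layout; the revno is the one from the chat:

    mkdir -p repo/trusty && cd repo
    bzr branch lp:charms/trusty/percona-cluster trusty/percona-cluster
    (cd trusty/percona-cluster && bzr revert -r 40)   # reset the working tree to bzr revno 40
    juju deploy --repository=. local:trusty/percona-cluster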
<ayr-ton> hatch: hatched, is you?
<hatch> ayr-ton: it is
<hatch> assuming you're talking about github :)
<ayr-ton> hatch: Ahahaha. So sorry about forgetting your issue. Working on it right now /o\
<hatch> ayr-ton: haha no problem at all :) I was just going through the issues and adding some features the other day and I thought I'd add a note to that bug
<hatch> s/bug/enhancement
<ayr-ton> hatch: Do you want me to use JUJU_DEV_FEATURE_FLAGS=actions in the charm for early adoption?
<hatch> ayr-ton: yeah I'd be ok with using new features for this functionality - using a config option feels pretty hacky to do this
<ayr-ton> hatch: Okay. Just a sec.
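For context, a hedged sketch of using that flag; the variable is spelled JUJU_DEV_FEATURE_FLAGS in juju of this era and generally needs to be set before bootstrapping:

    export JUJU_DEV_FEATURE_FLAGS=actions
    juju bootstrap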
<Muntaner> hi :)
<ayr-ton> marcoceppi: https://github.com/ayr-ton/zabbix-charm | https://code.launchpad.net/~ayrton/charms/trusty/zabbix/trunk
<ayr-ton> it is working
<ayr-ton> just some bugs when departing a db relation, but I'm already working on fixing this.
<gadago> could anyone help me with the ubuntu openstack installer?
<gadago> no matter what install type i choose (Autopilot, Multi, Single) I can't get past "Bootstrapping Juju"
<gadago> MAAS starts the installer, etc, etc, seems to create the network bridge for Juju, all seems to be okay, but then it just sits there at the login prompt
<gadago> the openstack installer just continues to say "Bootstrapping Juju"
<gadago> no idea what I'm doing wrong
<gadago> I've been following this guide here http://www.ubuntu.com/download/cloud/install-ubuntu-openstack
<marcoceppi> gadago: what architecture/hardware specs are the machines in maas, and how many machines do you have? do any machines ever appear allocated in the maas UI?
<lazyPower> mbruzek: you can file bugs against jujucharms.com here => https://github.com/CanonicalLtd/jujucharms.com
<gadago> marcoceppi, I have two virtual machines (As nodes in maas), plus a third physical node
<gadago> the virtual machines currently only have one NIC each (I can add more of course), the physical node has six
<marcoceppi> CPU/memory?
<gadago> virtual have 2 cpus, 4gb ram
<gadago> physical is 24 cpus 48gb ram
<marcoceppi> that should be enough, are they x86 arch?
<marcoceppi> I know at one point the minimum server requirement was like 8 machines, but that was a while ago. I'm not versed in all the new offerings
<gadago> x86 (amd64)
<marcoceppi> you're unable to bootstrap a machine, which means there is a juju -> maas issue
<marcoceppi> So, this typically boils down to one of two things
<marcoceppi> 1) credentials issue
<marcoceppi> Either you have the wrong credentials, or the wrong server hostname for maas (or you can't actually talk to maas from where you're running the installers)
<marcoceppi> that last part is technically not a credentials issue, but we're just going to lump it in there
<gadago> marcoceppi, could be the second I feel
<gadago> my maas server is on 192.168.255.10 with my deployment network being 10.0.14.0/24
<gadago> default gateway is 10.0.14.10 (other IP on maas server)
<marcoceppi> 2) You don't have a machine that juju expects to find. By default juju uses a set of constraints to request a machine: 1 CPU, 700MB RAM, x86_64. Juju asks MAAS for a machine that meets this minimum requirement and MAAS says "OKIE DAY! here you go" or 409 CONFLICT
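Two quick checks matching those causes; the profile name and api key are placeholders, and the 1.0 API path matches MAAS of this era:

    # 1) credentials: can you reach and log in to the MAAS API from the install host?
    maas login my-maas http://192.168.255.10/MAAS/api/1.0 <api-key>
    # 2) constraints: does MAAS hold a node that satisfies juju's request?
    juju bootstrap --constraints "arch=amd64 mem=2G" --debug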
<marcoceppi> gadago: so, from where you're running these installer commands, can you ping the MAAS server?
<gadago> I'm running the openstack-install command on the maas server
<gadago> I'm using this guide http://www.ubuntu.com/download/cloud/install-ubuntu-openstack
<marcoceppi> gadago: well, that's not going to be much of an issue then
<marcoceppi> gadago: in the maas web ui, can you see a machine allocated after running the installer?
<gadago> marcoceppi, under nodes, it shows as "Allocated to root"
<marcoceppi> gadago: do all of them say that?
<gadago> no just the one at the moment
<marcoceppi> gadago: interesting
<marcoceppi> gadago: that machine that's allocated to root
<marcoceppi> can you ping it? does it have an address?
<gadago> that one has PXE booted, rebooted then ran the cloud-init script and setup a juju-br0 device on the 10.0.14.0/23 network
<gadago> I can ssh to it
<gadago> but that's as far as it goes
<marcoceppi> yeah, so it bootstrapped
<marcoceppi> gadago: do you have a jujud process running?
<gadago> no jujud, but there is a curl process of curl -sSfw tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s  --retry 10 -o /var/lib/juju/tools/1.21.1-trusty-amd64/tools.tar.gz https://streams.canonical.com/juju/tools/releases/juju-1.21.1-trusty-amd64.tgz
<marcoceppi> gadago: ah, does this maas install have outside access?
<marcoceppi> I wonder if the http proxy isn't setup correctly
<marcoceppi> which would cause it to fail to download the juju binary, which would result in a hung bootstrap
<gadago> marcoceppi, the one odd thing I have had with this test maas setup is that I have to set the gateway on the network for the maas config to the maas server itself; if I actually set the gateway to the network gateway, dns fails and the server won't pxe properly
<gadago> so in, short if I try to ping google.com, no reply
<marcoceppi> gadago: yeah, maas wants to run both DNS and networking
<marcoceppi> well DNS and DHCP/Networking
<marcoceppi> so what you could/should do: create an internal network (sounds like you did 10.0.14.0/23), then set up iptables to forward that br device to the main eth device on the maas server
<gadago> my maas server is of course not a gateway
<marcoceppi> then setup dns forwarding from the maas server to an external dns server
<marcoceppi> that's the only way I've run MAAS, I'm not sure how to do it so it just runs DNS
<gadago> I already have dns forwarding sorted (google.com resolves on the node)
<marcoceppi> there's a drop down in the maas admin panel to switch it to DNS only
<marcoceppi> but you'd want to ask in #maas about dns only (non dhcp) maas setups
<marcoceppi> once that's all sorted, and you can wget/curl from within a node, bootstrap should work
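A hedged sketch of the forwarding marcoceppi describes, plus the check he ends on; the external NIC name and the internal subnet are assumptions for gadago's topology:

    # on the MAAS server: NAT the internal node network out the external interface
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -s 10.0.14.0/23 -o eth0 -j MASQUERADE
    # then, from inside a node, confirm the tools payload is actually reachable
    curl -I https://streams.canonical.com/juju/tools/releases/juju-1.21.1-trusty-amd64.tgz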
<gadago> thanks marcoceppi, at least that's a bit of progress :)
<gadago> #maas has for some reason become an invite only channel
<gadago> so no help there
<marcoceppi> gadago: yeah, MAAS is pure awesome, but setting it up on less than 5 nodes in a network it doesn't own can take a bit of jumping since it's designed to manage multiple datacenters and racks of servers :)
<marcoceppi> gadago: that's... wait, let me find someone to fix that
<gadago> marcoceppi, any ideas why maas does not work when not using it as the gateway? I know it's a dns issue, but I find that quite bizarre
<marcoceppi> gadago: I don't have enough experience. When I do MAAS testing my setup has an Intel NUC on its own gigabit switch and the maas master works as the gateway
<marcoceppi> I'm not home at the moment so I can't try it out with DNS only setup
<gadago> marcoceppi, np
<gadago> marcoceppi, finally got the nodes routing through the maas server (with iptables forwarding), but still no luck :(
<marcoceppi> gadago: can you curl a payload when ssh'd on the node?
<gadago> not tried curl, but internet access on the node is now fine, and I can ping the maas controller on the other subnet just fine
<gadago> even getting a /var/lib/juju/tools/1.21.1-trusty-amd64/jujud process now
<gadago> mongod seems to be doing something too
<gadago> should this be a long process?
<gadago> "Install OpenStack in minutes" lol
<gadago> the openstack installer on the maas server has hung as well
<gadago> maybe not enough grunt?
<marcoceppi> gadago: well, is maas set to use the fast installer?
<marcoceppi> if not, it's going to take like 10 mins to get the node up alone
<marcoceppi> then about 2-3 mins for juju to bootstrap itself
<marcoceppi> but if you've got jujud you're on your way
<marcoceppi> do you have anything in /var/log/juju ?
<gadago> using the fast installer
<gadago> maybe I'm just being impatient then
<gadago> the elapsed time on the openstack installer stopped counting up, that's all, and the little blue/purple graph stops moving
<gadago> I'll check the log in a moment
<marcoceppi> gadago: well clock stopping is never a good sign
<gadago> :)
<gadago> WARNING juju.cmd.jujud machine.go:602 determining kvm support: INFO: Your CPU does not support KVM extensions
<gadago> this one is a VM
<gadago> maybe I should try the Multi option for the installer
<bdx> I've a concern regarding the ceph charm. It seems when ceph-cluster-network is specified in the ceph charm's config and differs from the specified ceph-public-network, the charm fails to deploy because a second network interface is not brought up on the ceph-cluster-network.
<nicopace> hi guys... one question... how can i destroy my local lxc environment without using 'juju destroy-environment local'? (it is not responding, as i ran out of disk space :S)
<bdx> Does anyone know a way around this?
<travis_> hi all
<travis_> i am suffering here with a maas environment, trying to get juju to bootstrap
<travis_> i keep getting this curl failed
<travis_> Attempt 1 to download tools from https://streams.canonical.com/juju/tools/releases/juju-1.21.1-precise-amd64.tgz...
<travis_> curl: (7) Failed to connect to streams.canonical.com port 443: No route to host
<travis_> tools from https://streams.canonical.com/juju/tools/releases/juju-1.21.1-precise-amd64.tgz downloaded: HTTP 000; time 2.711s; size 0 bytes; speed 0.000 bytes/s Download failed..... wait 15s
<travis_> though i can ping streams.canonical.com from the maas box
<travis_> any thoughts
<lazyPower> nicopace: have you tried juju destroy-environment local -y --force ?
<nicopace> lazyPower: let's see what happens with that command...
<lazyPower> nicopace: if that doesn't work, the next step would be to lxc-destroy a container or two - and then triage the environment accordingly (juju won't be aware that you've pulled the rug out on those containers)
<nicopace> lazyPower: nothing happens
<lazyPower> nicopace: sudo lxc-ls --fancy, find a running machine or two that you can destroy
<nicopace> so... you say i have to destroy some of the lxc containers by hand?
<lazyPower> the destroy-environment --force should have nuked it from orbit
<lazyPower> but if the containers are still running and you have no disk space left - lets try removing one and seeing if we can get the env to respond
<nicopace> there is no output
<lazyPower> there wont be
<nicopace> they are all stopped
<lazyPower> and juju status should say the environment is no longer bootstrapped
<nicopace> sudo lxc-ls --fancy
<nicopace> NAME                       STATE    IPV4  IPV6  AUTOSTART
<nicopace> ---------------------------------------------------------
<nicopace> juju-precise-lxc-template  STOPPED  -     -     NO
<nicopace> juju-trusty-lxc-template   STOPPED  -     -     NO
<nicopace> ubuntu-local-machine-48    STOPPED  -     -     YES
<lazyPower> interesting... you have a local machine of id 48, that is hanging around.
<nicopace> juju status
<nicopace> ERROR Unable to connect to environment "local".
<nicopace> Please check your credentials or use 'juju bootstrap' to create a new environment.
<nicopace> Error details:
<nicopace> cannot connect to API servers without admin-secret
 * lazyPower blinks
<lazyPower> well that's a new error message on me re: admin-secret
<lazyPower> do you have a local.jenv file in ~/.juju/environments/ ?
<nicopace> i just want to nuke everything
<nicopace> no
<nicopace> but i had some minutes ago
<nicopace> !
<lazyPower> ok, so it's supposedly gone then; without a jenv, there is no environment as far as juju is concerned
<nicopace> but juju doesn't respond
<nicopace> ok... i can bootstrap a new env
<lazyPower> sudo initctl list | grep juju
<lazyPower> do you see any juju services running?
<nicopace> nothing
<nicopace> no
<lazyPower> ok, good - your state server and api are gone
<nicopace> (i've destroyed the recently created env first)
<lazyPower> you have one hanging machine that i suggest you remove, unless that machine is important to you
<nicopace> no
<lazyPower> sudo lxc-destroy -n ubuntu-local-machine-48
<nicopace> so how do i destroy it?
<lazyPower> after that you should be g2g to run `juju bootstrap`
<nicopace> ok
<nicopace> and what about the other two?
<nicopace> those are just templates lazyPower
<nicopace> ?
<lazyPower> nicopace: you can leave the templates around, unless you want to fetch the 200mb cloud image and wait for them to be recreated again
<nicopace> no
<nicopace> ok
<lazyPower> those are created when you request a deployment for the series on the local provider.
<nicopace> so... now i should be ok to start again
<nicopace> thanks lazyPower
<nicopace> !
<lazyPower> correct
<lazyPower> np nicopace, lmk if you have any further issues
<nicopace> (y)
<lazyPower> keep up the good work and reports on the list :)
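The recovery sequence from this exchange, in order; the container name is the one from nicopace's listing:

    juju destroy-environment local -y --force    # ask juju to tear it down first
    sudo lxc-ls --fancy                          # see which containers survived
    sudo lxc-destroy -n ubuntu-local-machine-48  # remove the leftover machine by hand
    sudo initctl list | grep juju                # confirm no juju agents remain
    juju bootstrap                               # start fresh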
<sinzui> wwitzel3, how goes bug 1417875? I think I will postpone it to 1.21.3 as I don't think a fix will be available in a few hours
<mup> Bug #1417875: ERROR juju.worker runner.go:219 exited "rsyslog": x509: certificate signed by unknown authority <canonical-bootstack> <logging> <regression> <juju-core:Triaged by wwitzel3> <juju-core 1.21:Triaged by wwitzel3> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1417875>
<wwitzel3> sinzui: sadly I don't think I'll have a fix in the next few hours no
<sinzui> okay, thank you wwitzel3
#juju 2015-02-10
<blr> Are there any known race conditions when using apt-mirror and apt-http-proxy? juju is unable to acquire a dpkg lock.
<sarnold> blr: a dpkg lock is entirely local to the one computer; probably another apt process is running on the machine, perhaps via unattended-upgrades or another mechanism; ps auxw | grep -i apt  would probably point it out
<blr> sarnold: fairly certain there is no other apt process other than that used by juju, and this only seems to occur when using apt-mirror.
<sarnold> blr: hmm, I haven't tried apt-mirror, but it doesn't feel like something I'd expect to happen..
<blr> sarnold: I am running squid-deb-proxy locally with apt-http-proxy, which is potentially relevant.
<sarnold> blr: I use squid-deb-proxy too, lovely thing :) hehe
<blr> sarnold: oh goodness yes.
<blr> particularly when you're in new zealand, and archive.ubuntu.com isn't exactly fast :)
<sarnold> blr: owwwwwww
<sarnold> blr: it's not even fast from .us  :)
<blr> heheh
<sarnold> .. I can't imagine, those poor kiwis trying to move all those packets
<blr> there's a reason they're endangered.
<sarnold> hehe :) good luck sorting it out, I'll be curious to hear what might cause your lock problem
<blr> yeah, bit of an odd one. Once my local cache is warm it isn't a big deal, but would be nice to have both the mirror and the apt proxy working together.
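One way to point juju at a local squid-deb-proxy, sketched with the juju 1.x proxy keys; the address is an assumption for the local bridge and 8000 is squid-deb-proxy's default port:

    # on an already-bootstrapped environment
    juju set-env apt-http-proxy=http://10.0.3.1:8000
    juju get-env apt-http-proxy    # confirm it took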
<Flopy69> Helllo!
<Flopy69> What are the things to check when "JuJu sync-tools" does not work?
<hazmat> Flopy69: depends on the error.. pastebin?
 * lazyPower puts on his charm reviewer hat
 * lazyPower flips on the head lamp and dives into teh review queue
<lazyPower> https://github.com/whitmo/bundle-kubernetes - if someone has 5 minutes, a brief once-over and a "yes, i feel confident i have enough information to dive in with this readme" would be great. we're having a bit of an internal debate on how much wall of text we need to add here - or if it's a no, pointers to what you feel is hazy are invaluable to me
 * whit|review-q puts on review cape
<whit|review-q> is there a reference for all the environment variables one can set to affect the behavior of the juju client?
<whit|review-q> hiya hazmat!
<lazyPower> whit|review-q: not to my knowledge
<hazmat> whit|review-q: greetings
<hazmat> whit|review-q: there is in the public docs, it's mostly up to date even ;-)
<whit|review-q> hazmat, I remember you commenting on that in the past...
 * whit|review-q goes to find it
<hazmat> whit|review-q: yeah. its been mostly resolved since then
<whit|review-q> https://juju.ubuntu.com/docs/reference-environment-variables.html
<lazyPower> nice
<lazyPower> mbruzek: https://bugs.launchpad.net/charms/+source/openmanage/+bug/1420475  - as promised
<mup> Bug #1420475: CI infrastructure reports failure due to hardware limitations <openmanage (Juju Charms Collection):New> <https://launchpad.net/bugs/1420475>
<wwitzel3> blahdeblah_: were you ever able to figure out if you could get me access to that juju env? re: https://bugs.launchpad.net/juju-core/+bug/1417875
<mup> Bug #1417875: ERROR juju.worker runner.go:219 exited "rsyslog": x509: certificate signed by unknown authority <canonical-bootstack> <logging> <regression> <juju-core:Triaged by wwitzel3> <juju-core 1.21:Triaged by wwitzel3> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1417875>
<blahdeblah_> wwitzel3: I've got a meeting about it this morning; I'll ask the question again.
<blahdeblah_> wwitzel3: What's the best launchpad group to subscribe to the repo, once it's approved?
#juju 2015-02-11
<Mmike> Hi, lads. When using amulet, how can I enumerate/list/get/fetch all the deployed units?
<gnuoy> jamespage, got a sec for https://code.launchpad.net/~gnuoy/charms/trusty/openstack-dashboard/1420708/+merge/249303 ?
<jamespage> gnuoy, what does -t do?
<gnuoy> jamespage, it creates a default pin at priority 990 using the specified release string
<gnuoy> otherwise precise/universe has priority over precise-updates/cloud-tools/main
<gnuoy> and python-six doesn't upgrade
<jamespage> gnuoy, remind me what the default priority is?
<gnuoy> jamespage, http://paste.ubuntu.com/10171980/
<jamespage> gnuoy, hmm - don't we want the version from precise-icehouse though?
<gnuoy> jamespage, argh, ok, I'll be back....
<gnuoy> jamespage, mp updated
<jamespage> gnuoy, looks ok - but I'd probably just add python-six to packages
<jamespage> and install in one hit
<gnuoy> I was keeping it separate so it could be removed later since, as you say, the root cause is a pkg dep error
<gnuoy> but I don't feel strongly
<gnuoy> jamespage, adding it to packages won't work
<gnuoy> as it's filtered on installed packages
<jamespage> gnuoy, ah yes - of course
<jamespage> gnuoy, +1
<gnuoy> ta
<hazmat> jamespage: a new charmhelpers unitdata.py landed; might be useful if you need to keep state in charms
<lazyPower> Mmike: there's a dictionary of all units in the topology exposed by d.sentry.units
<lazyPower> d being your deployment object
<lazyPower> exit
<jcastro> lazyPower, ok how do I find the test results for the mariadb charm?
<jcastro> tvansteenburgh, is there a way to see charm test results for one individual charm over time?
<tvansteenburgh> jcastro: http://reports.vapour.ws/charm-summary/meteor
<jcastro> ah, was in the wrong section then, thanks
<jcastro> marcoceppi, we should probably decide what we'll deploy @ SCALE
 * marcoceppi nods
<whit> rick_h_, were there any conclusion wrt bundle inheritance?
<whit> *s
<rick_h_> whit: that what was being asked for was more bundle composition vs inheritance and there's some work with core folks to see if we can think about bundles in a way that would allow that
<whit> rick_h_,  composition works though I would imagine better support for what deployer does now would be less churn
<whit> but with so few users
 * whit shrugs
<rick_h_> whit: that was just the feedback there. What they hoped to accomplish wasn't really something inheritance could do
<rick_h_> whit: so it was decided just going back and adding inheritance via the bundle definition wasn't going to solve it either
<noodles775> Is the m3.small too small for the bootstrap node? Or why is the m1.small the default, but m3.small (which is the recommended transition from m1.small) not an option? http://paste.ubuntu.com/10177609/
<jrwren> noodles775: https://bugs.launchpad.net/juju-core/+bug/1373516 ?
<mup> Bug #1373516: Switch default instance type from m1.small to t2.small/m3.medium for EC2 provider <ec2-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1373516>
<jcastro> lazyPower, https://launchpad.net/ubuntu/vivid/+source/mariadb-5.5
<jcastro> we have it for ppc64le
<lazyPower> oh nice
<lazyPower> my search-fu stinks apparently
<jcastro> so all that needs to happen is add a config option
<jcastro> it took me a while to find it
<lazyPower> http://packages.ubuntu.com/search?keywords=mariadb&searchon=names&suite=utopic&section=all
<jcastro> that site doesn't show ppc64le
<lazyPower> oh boo :(
<noodles775> Thanks jrwren
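Until that default changes, the instance type can be overridden at bootstrap with a constraint; instance-type is a juju 1.2x constraint and m3.medium is the type the bug suggests:

    juju bootstrap --constraints "instance-type=m3.medium"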
<blr> Is there a convention for managing pip download caches in charms? Have source tarballs in files/ so perhaps there?
<marcoceppi> blr: you could cache them anywhere in the charm, I don't think there's any one convention defined
<blr> marcoceppi: yep, ended up adding a pip-cache directory under files which the deploy make target updates now. thanks
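A sketch of consuming such a cache from a hook, assuming the files/pip-cache layout blr describes; CHARM_DIR is the standard hook environment variable:

    # install from the charm-local cache only, never from the network
    pip install --no-index --find-links "$CHARM_DIR/files/pip-cache" -r requirements.txt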
<mwenning> stokachu, ping
<stokachu> mwenning: pong
<mwenning> stokachu, a customer just pinged me - he's trying to run an openstack install using a 14.10 LDS server, 14.04 MAAS and 14.04 target machines.
<stokachu> cool
<mwenning> stokachu, his machines name the NIC interface as 'p1p1' - this causes an error when it tries to create juju-br0
<mwenning> ever heard of this?
<stokachu> yea there is another bug on gh where it was reported
<stokachu> not sure its an installer issue though
<mwenning> groovy, what's the bug #
<stokachu> mwenning: https://github.com/Ubuntu-Solutions-Engineering/openstack-installer/issues/349
<mwenning> :-)
<stokachu> should probably create a launchpad bug against juju
<mwenning> That was my next question...
<stokachu> i haven't looked that far into it so i dont know who to start with first
<stokachu> i would assume juju if it's configuring the br0 network before a network device rename takes place
<stokachu> but that poster never got back to me
<mwenning> ok, do you want to create the bug or me?
<stokachu> mwenning: go for it if its from a customer
<stokachu> ill subscribe the team to it
<stokachu> and work with juju guys to reproduce
<mwenning> stokachu, ok.  I'm gonna get some data from him, I'll ping you when I get the bug written up
<stokachu> mwenning: cool, get a sosreport of his system too
<stokachu> attach to the bug
<stokachu> mwenning: thanks
<mwenning> ah, yes.  does that pull everything in or do you need other stuff?  for example maas wants a tar of /etc/maas/*, etc
<stokachu> mwenning: it should pull maas data in as well
<stokachu> it checks if its installed and acts on that
<mwenning> stokachu, groovy, will do
<stokachu> cool man
<marcoceppi> mwenning stokachu I think you can outline some of the bridge stuff in juju config
 * marcoceppi greps through go src code
<stokachu> marcoceppi: cool i need to take a look at that
<stokachu> marcoceppi: has something to do with udev device renaming
#juju 2015-02-12
<wwitzel3> blahdeblah_: not sure if you caught that earlier since I didn't direct it at you, but replacing the rsyslog-*.pem files in /var/log/juju with the ones from machine 0, and then restarting jujud for the machines, is a manual workaround.
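That workaround as a hedged sketch; the paths are from the chat, while the machine number and upstart job name are placeholders to adapt:

    # copy the rsyslog certs from machine 0 and push them to the affected machine
    juju scp '0:/var/log/juju/rsyslog-*.pem' .
    juju scp rsyslog-*.pem <machine>:/tmp/
    juju ssh <machine> 'sudo mv /tmp/rsyslog-*.pem /var/log/juju/ && sudo service jujud-machine-<n> restart'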
<ayr-ton> marcoceppi: Sending a merge request by tomorrow for the charms-tools enhancement.
<ayr-ton> marcoceppi: Do you want me to send the merge request for the zabbix charm too? This one is in your account.
<blr> am I right in thinking that I only need to provide keys in a helper.RelationContext's required_keys to get keys from a related service, or do I need to implement get_data()? Getting 'Incomplete relation' and having some trouble debugging it.
<blr> the other service/charm has implemented provide_data(), returning a dict with the same keys in the related charm's required_keys
<Muntaner> hi guys
<marcoceppi> hey Muntaner
<jcastro> http://askubuntu.com/questions/584716/what-does-juju-destroy-service-do-exactly
<jcastro> some good questions coming in folks!
<jcastro> marcoceppi, this one's for you: http://askubuntu.com/questions/584582/importerror-no-module-named-charm-toolbox-pairinggroup
<lazyPower> jcastro: https://askubuntu.com/questions/584716/what-does-juju-destroy-service-do-exactly/584746#584746 - added an answer
<lazyPower> my nested lists look like dookie though. i need some markdown school
<jcastro> on it
<mwak_> :)
<lazyPower> oh you just tab it. nice.
<lazyPower> thanks jcastro
<marcoceppi> jcastro: why is that for me?
<marcoceppi> that's not charm-tools
<marcoceppi> this isn't even juju related
<jcastro> I saw charm tool at the top
<marcoceppi> jcastro: charm is something else, the python project is charmtools
<sebas5384> revagomes: ping
<sebas5384> hey guys! we are at the Drupal Conference contributing to the Drupal charm
<sebas5384> https://github.com/sebas5384/charm-drupal
<marcoceppi> sebas5384: o/
<sebas5384> hey marcoceppi !
<sebas5384> we have some problems with internet so it's gonna be a slow code sprint hehe
<marcoceppi> sebas5384: lmk if there's anything we can do to help!
<sebas5384> thanks marcoceppi we are preparing the environment here
<sebas5384> so actually we are waiting downloads and that kind of stuff
<gnuoy> Am I right in thinking that you can't have an amulet sentry on a subordinate?
<gnuoy> I'm writing some amulet tests for a subordinate charm and when I look at the available sentries I don't see one for the subordinate
<marcoceppi> gnuoy: upi cam
<marcoceppi> gnuoy: yo ucan't, but we don't use sentries anymore
<marcoceppi> that's a limitation in juju, can't have subordinates on subordinates
<marcoceppi> gnuoy: what version of amulet do you have?
<gnuoy> marcoceppi, 1.9.0-0ubuntu1~ubuntu14.04.1~ppa1
<gnuoy> marcoceppi, what does upi stand for?
<marcoceppi> yeah, this might be a bug where it's not showing up in the units list, we use juju-run under the hood now instead of sentries
<marcoceppi> gnuoy: that was a lot of fat finger action
<marcoceppi> gnuoy: "you can't"
<gnuoy> ahh :)
<marcoceppi> then I went to hit backspace but mashed return instead
<marcoceppi> it's cold in my office today
<gnuoy> marcoceppi, thanks for the help, much appreciated. I'll have a bit more of a dig and raise a bug if needs be.
<blr> could someone please point me towards a good example of a charm on LP using charmhelper services helper.RelationContexts? Think I must be missing something fundamental that isn't clear in the docs.
<bdx> I am trying to understand the os-data-network and flat-interface configs on the nova-compute charm. Does anyone know if these refer to the same network, if only one or the other should be used, and in what case either matters?
<bdx> It seems to me that the os-data-network should be the network on which the bridge-ip and flat-interface belong.... possibly I am missing something..
<bdx> Any takers??
<marcoceppi> blr: cory_fu should be able to point you in the right direction
<blr> marcoceppi: hey, thanks - I'll ping him.
<marcoceppi> bdx: I'm not certain, but we have a whole team of charmers that work on openstack charms. The problem is they're Europe-based. There's a mailing list you can mail with your question though
<marcoceppi> bdx: you can also mail the juju mailing list in general (juju@lists.ubuntu.com)
<bdx> marcoceppi: Thanks!
<catbus1> Hi, if juju state server fails to start, is there a way to export the MongoDB from this system and import it to a new env?
<catbus1> Is this http://docs.mongodb.org/manual/core/import-export/ what I should follow?
<marcoceppi> catbus1: no idea, there's a juju backup but I'm not sure what that entails
<marcoceppi> and the juju backup doesn't comply to plugin rules
<marcoceppi> catbus1: but it does look like it does a mongo backup
<catbus1> marcoceppi: thanks for the pointer.
#juju 2015-02-13
<Crypticus> anyone know how to make the juju-gui bind to all interfaces?
<Crypticus> or just change which interface it binds to?
<rick_h_> Crypticus: it's not something exposed in the charm atm.
<rick_h_> Crypticus: we'd welcome a bug on that if you have a need for it, but typically there's just the one public IP in question when you deploy the charm.
<Crypticus> rick_h_, turns out... it is by default
<rick_h_> Crypticus: I'd love to hear what you're up to and how we can help
<rick_h_> Crypticus: oh, then even better heh
<Crypticus> I thought that was the issue but it looks like an external network issue
<rick_h_> I know we've added custom port but nothing on custom network interface
<rick_h_> Crypticus: cool, then glad to hear we've got you covered
<Crypticus> Well I am trying to install OpenStack with Juju using LXC.
<Crypticus> I am using the manual provider because the openstack-installer way doesn't seem to make a usable deployment
<Crypticus> So far so good.
<rick_h_> Crypticus: hmm, yea the one I know that the folks have been working on is this one: https://jujucharms.com/openstack/ but that's more maas/bare metal I think than lxc
<Crypticus> I got 10 LXC containers on a HP Proliant Gen9 I used the download ubuntu template, installed openssh-sever and dbus and configured my network to use a macvlan bridge for external access and the veth for internal communication
<rick_h_> sounds like a party :)
<Crypticus> just got juju to bootstrap and got the gui installed on machine 0. Couldn't figure out why my windows machine couldn't access the GUI, but it turns out my IT has something screwed up in the DNS.
<Crypticus> rick_h_, thanks for offering to help
<Crypticus> This is a lot of fun... but there is a lot of magic in the juju and it's hard to debug still
<Crypticus> rick_h_, could you tell me how to restart the juju gui? I just changed my environments.yaml to include an admin-secret and I'm hoping that will let me login with a more memorable password
<Crypticus> My thought is to do juju ssh 0 and restart manually service juju-gui, but I assume that will not get the new password
<rick_h_> Crypticus: hmm, the environments.yaml isn't read once things are bootstrapped. I'm actually not sure how you'd change that up as it's alive in the environment at this point.
<rick_h_> think of it like a template used to create a document, changing the template doesn't update the documents you've already created
<Crypticus> rick_h_, OK... well I am going to destroy and redeploy the service and see if that works, if not... more digging
<Crypticus> wish the docs mentioned this upfront
<rick_h_> Crypticus: well it's the admin password of the environment
<rick_h_> so just redeploying the GUI won't do much
<rick_h_> thumper: any way to update admin-secret of an already running env?
<rick_h_> ^
<thumper> hey
 * thumper thinks
<thumper> well
<thumper> how about you tell me what you are wanting to do...
<thumper> admin secret has many initial masters
<Crypticus> how about changing .juju/environments/manual.jenv
<Crypticus> then redeploying?
<rick_h_> thumper: after bootstrap, change the admin-secret so the gui will let me login with a shorter or more memorable password
<thumper> sure
<Crypticus> Trying to change the admin-secret
<thumper> juju user change-password
<Crypticus> LOL
<Crypticus> nice
<rick_h_> well there you go then :)
<rick_h_> thumper: that's in 1.21?
<thumper> um...
<Crypticus> I figured it must be something easy... but how do you find these commands?
<rick_h_> Crypticus: just to check what version of juju you're running
<thumper> let me look
<thumper> yep
<thumper> juju help commands
<thumper> juju help user
<thumper> help FTW
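The discovery path thumper points at, plus the command itself (present in 1.21 per the check above); it should prompt for the new password:

    juju help commands | grep user    # find the user subcommands
    juju help user                    # read what they do
    juju user change-password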
<thumper> rick_h_: now, for the record
<thumper> we aren't changing admin-secret
<Crypticus> OK... did that... but didn't think to look for user... I was looking for how to restart a service... didn't find that
<thumper> what we are changing is the user password for the admin user
<Crypticus> thanks a lot thumper and rick_h_
<thumper> Crypticus: np
<thumper> admin-secret is still used by the state server machine agent to talk to mongo
<thumper> I think
<thumper> Crypticus: services are kinda ethereal
<thumper> Crypticus: you restart units not services
<thumper> or machines
<Crypticus> hrm... OK
<Crypticus> cool
<thumper> units are instances of a service
<rick_h_> thumper: yea, gotcha
<rick_h_> thumper: but since we support user/login now I think it should work out ok
 * thumper nods
<thumper> should do
<thumper> now I can soon land my patch that changes the initial user from 'admin' to the logged in user
<rick_h_> ooooh, that's going to mess up the world lol
<thumper> rick_h_: will the gui / quickstart handle that
<Crypticus> well thanks so much guys... I can't wait to document all this I think it's going to help some people that need to test openstack in a really lightweight manner (LXC)
<thumper> np
<thumper> Crypticus: good luck
<rick_h_> thumper: yea, I think we'll be ok. The big thing is just a ton of docs, blogs, history around that admin user
<thumper> yeah
 * rick_h_ wonders how many scripts have that hard coded in ops/etc
<rick_h_> thumper: also I thought-dumped in your JES doc
<rick_h_> hopefully it's useful; let me know if we want to chat/walk through stuff, as we've got a few steps to get where you're headed I think, and now that the code/etc is real I'm nervous about a couple of bits
<thumper> rick_h_: sure... did you want to chat now or start of next week?
<thumper> rick_h_: we are right up against implementing this
<thumper> rick_h_: but I wanted to get it written down and agreed on
<rick_h_> thumper: I can do my best now if that's helpful
<rick_h_> or start of next week can also work
<thumper> quick iteration on the doc is faster than rewriting code
<thumper> lets chat now
<rick_h_> rgr
<nicopace> hi guys, i'm having a problem with amulet
<nicopace> when i do:
<nicopace> logstash_indexer_agent = d.sentry.unit['logstash-indexer/0']
<nicopace> curl_response = logstash_indexer_agent.run("curl http://127.0.0.1:9200/index/_search?size=1")[0]
<nicopace> print(curl_response)
<nicopace> the output says: http://paste.ubuntu.com/10197382/
<nicopace> (ssh: REMOTE HOST IDENTIFICATION HAS CHANGED)
<sarnold> hah, love that it appears something tries to execute the results of the ssh output.
<nicopace> but it shouldn't
<nicopace> i've been implementing tests for almost 2 months with this methodology
<nicopace> and this is the first time it happens
<nicopace> any idea what could it be sarnold?
<nicopace> i've found a similar error in a bug that is already patched (1 year ago) here: https://bugs.launchpad.net/ubuntu-ci-services-itself/+bug/1283198
<mup> Bug #1283198: WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED <airline> <Ubuntu CI Services:Fix Released by doanac> <Ubuntu CI Engine:Fix Released by doanac> <https://launchpad.net/bugs/1283198>
<sarnold> nicopace: sorry, I just found that bit entertaining..
<nicopace> oh... ok
<nicopace> well.. i'll ask the list, thanks!
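For the record, the usual cure for that ssh warning is dropping the stale host key, since local-provider containers reuse addresses across rebuilds; the address is a placeholder:

    ssh-keygen -R <container-ip>    # removes the outdated entry from ~/.ssh/known_hosts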
<gnuoy> dosaboy, do you have anytime to take a look at https://code.launchpad.net/~gnuoy/charms/trusty/neutron-openvswitch/1421215/+merge/249535 ?
<dosaboy> gnuoy: not right now but will in a few
<gnuoy> thanks
<cjwatson> Hi, I'm trying to get the local provider working for me on vivid, but having no luck doing even the simplest things.  Here's my log: http://paste.ubuntu.com/10203407/
<cjwatson> Could anyone have a look?  LXC normally works for me, but juju doesn't seem to be getting as far as invoking lxc-anything here, judging from ps and strace, there's nothing interesting in any log I can find, and I'm using upstart since I know juju doesn't work with systemd on the host yet.
<hazmat> cjwatson: hmm.. there's an http server normally on local provider acting as storage, it looks like there's some issue there uploading the charms to it. is there anything in the machine-0.log
<hazmat> cjwatson: machine-0 == host for local provider, there's normally something per default in ~/.juju/local
<hazmat> for the logs
<hazmat> cjwatson: the http server for storage on local provider defaults to port 8040 fwiw
<cjwatson> hazmat: machine-0.log hasn't had anything since I bootstrapped, last line is '2015-02-13 11:50:03 DEBUG juju.worker.logger logger.go:45 reconfiguring logging from "<root>=DEBUG" to "<root>=WARNING;unit=DEBUG"' and I've had an attempted deploy running for a minute or so
<cjwatson> hazmat: jujud is listening on 8040, but stracing it during an attempted deploy shows nothing but futex syscalls
<cjwatson> oh and epoll_wait.  uninteresting stuff like that
<cjwatson> there's a little bit of chatter with mongod as well, but nothing about 8040 as far as I can see
<hazmat> cjwatson: it's a little odd; normally i would suspect a firewall, but 500 errors sound like the web server is responding with an error. if you hit it directly via curl i'm curious what it does
<hazmat> cjwatson: the flipside is to destroy with force and try bootstrapping again
<cjwatson> hazmat: I've done the latter multiple times
<cjwatson> $ sudo netstat -anp | grep -w 8040
<cjwatson> tcp        0      0 10.0.3.1:8040           0.0.0.0:*               LISTEN      2991/jujud
<cjwatson> curl http://10.0.3.1:8040/   hangs
<cjwatson> oh wait, don't tell me this is a proxy thing
<hazmat> cjwatson: doh.. thats possible no_proxy="localhost" ?
<hazmat> er. no_proxy="10.0.3.1"
<cjwatson> hazmat: running juju deploy under "env -u http_proxy" doesn't help, but perhaps it's jujud's environment that matters?
 * cjwatson retries with no-proxy in environments.yaml
<cjwatson> ah, victory
<cjwatson> hazmat: thanks for the clues!
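The fix cjwatson lands on, sketched; no-proxy is a juju 1.x environment setting and the address is the local bridge from his netstat output:

    # in environments.yaml, under the local environment:
    #   no-proxy: localhost,10.0.3.1
    # then confirm the storage endpoint answers directly:
    curl http://10.0.3.1:8040/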
<TimNich> when juju bootstrapping in maas, are the same cluster boot images used as for a normal node enlistment?
<stub> Has anyone else been having provisioning problems with the local provider since 1.21.1 ? provisioning seems to stop sometimes, and no new instances until I've destroyed and rebootstrapped the environment.
<stub> I think there was an lxc update around the same time, so I don't know if it is juju or lxc
<stub> I can't locate any useful logs to diagnose further.
<jcastro> kirkland, peer review at your convenience: http://askubuntu.com/questions/585150/what-is-the-difference-between-a-snappy-apps-and-charms-and-click/585187#585187
<rick_h_> jcastro: I'd suggest mentioning a bit about charms reacting to others when related/interfaces/etc as a difference as well.
<adalbas> hi! one question related to units and relations. If I add a unit after the relation is added, the charm hooks should be able to set up the new unit with any configuration needed for the relation, am i right?
<jcastro> right
<adalbas> jcastro, so, in case i'm working with two services, a manager and a client, and I add a unit in the client, would all hooks from manager-relation and client-relation run?
<marcoceppi> adalbas: yes
<marcoceppi> while relations are created in a per-service context, hooks are executed at a per-unit level
<marcoceppi> so adding a new client would initiate the relation cycle for that new unit to the manager again
<adalbas> marcoceppi, that would run the manager-relation-joined and client-relation-joined hooks as well, and the -changed ones
<adalbas> if i have configured those
<jcastro> any ~charmer have time to copy precise/transcode to trusty?
<jcastro> the made up silly tests are passing. :p
<adalbas> marcoceppi, the add-unit to my charm worked before i added the relation, but not after, so I'm trying to check if my assumptions are correct
<jcastro> hey jose
<jcastro> kwmonroe has been working on some big data stuff, I think we should sync up with him and talk about what to show @ SCALE?
<lazyPower> jcastro: pulled the transcode charm, doing local verification before i promote. did we get a bug filed for this?
<jcastro> there's no bug afaict
<lazyPower> https://bugs.launchpad.net/charms/+source/transcode/+bug/1421767
<mup> Bug #1421767: Promote transcode charm to trusty <transcode (Juju Charms Collection):New> <https://launchpad.net/bugs/1421767>
<lazyPower> kirkland: if you want a bundle for trusty, i'll need a ticket and a new bundle repository attached
<lazyPower> but working the charm now, should have this done shortly.
<lazyPower> jcastro: just got a ping from jose on telegram, he's having internet issues and says "if you can hop on can you ping me there"
<nicopace> hi guys, how can i provide extra files to a particular charm (for example deb files, or config files)? Ex. data/extra.vcl for Varnish, or elasticsearch.deb for Elasticsearch?
<jcastro> nicopace, check this out: https://api.jujucharms.com/v4/trusty/elasticsearch-8/archive/config.yaml
<jcastro> you can just have the download location be a config option (with a default)
<jcastro> or you can just bundle whatever in the charm, but that might make it annoying to update
<nicopace> sure... but if i want to provide a deb file (a feature provided by the charm)
<jcastro> for config files, then that's easy
<jcastro> sure
<nicopace> i'm testing the charm
<jcastro> yeah
<nicopace> so that is one of the functions provided by the charm
<nicopace> jcastro: any idea about my question?
<jcastro> which one? about bundling a file? sure you can do that
<jcastro> just make like a /data or /payload directory or something in your charm
<jcastro> then you can put whatever you want in there, config files, debs, etc and reference that directory from your other hooks
<jcastro> does that answer your question?
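A minimal sketch of that layout; the payload directory name is illustrative and CHARM_DIR is the standard hook environment variable:

    # in your forked charm:
    mkdir -p payload && cp elasticsearch.deb payload/
    # then, in hooks/install:
    #   dpkg -i "$CHARM_DIR/payload/elasticsearch.deb"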
<nicopace> so, i need to 'fork' the charm in order to add files?
<Crypticus> rick_h_, You around?
<lazyPower> mbruzek: can you kick off the transcode ci test again? it was apparently kicked out
<mbruzek> lazyPower: ack
<mbruzek> lazyPower: cs:trusty/transcode-1?
<mbruzek> or cs:precise/transcode-1
<lazyPower> mbruzek: cs:precise/transcode-1
<rick_h_> Crypticus: what's up?
<Crypticus> I was going to ask if you know how to make juju deploy Juno Openstack packages
<Crypticus> I think I might have figured it out, but still testing
<rick_h_> Crypticus: cool, I've no idea tbh. We can probably find out but it's hitting late friday and folks will start heading out
<Crypticus> OK thanks
<Crypticus> Ill do more digging
<Crypticus> I found the juju get <service> command, that might be all I need
<Crypticus> thanks again!
<nicopace> jcastro: ^
<jcastro> nicopace, if you're modifying an existing one yes, you need to fork it
<nicopace> yes, i want to test elastiserch and varnish
<jcastro> yeah fork it, then include the debs in a directory, then modify the install hook to reference the local deb instead of the default source
<nicopace> jcastro: that's the strange thing... the charm already provides the functionality to look for a deb package inside the charm
<jcastro> oh ok, so that's good
<nicopace> well... i'll see what i can do :)
<jcastro> does the varnish charm have that too?
<nicopace> something similar... in its readme it recommends that if you want to add rules, you add an extra.vcl to the charm :S
<nicopace> (instead of adding it as a config option)
<mbruzek> lazyPower: I kicked the transcode tests again this time the *right* way
<lazyPower> mbruzek:  <3 preciate you
<mbruzek> lazyPower: if you want to follow along from home http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/73/console
<nicopace> guys... i'm having this error with an example deploy for logstash-agent: Error': u'cannot add relation "auth-proxy:juju-info kibana:juju-info": principal and subordinate services\' series must match',
<nicopace> auth-proxy: cs:~paulcz/precise/auth-proxy-0
<nicopace> kibana: cs:trusty/kibana-3
<jose> jcastro: pong, sure!
<jose> got my internet connection back \o/
<marcoceppi> nicopace: the error states that you can't put a precise subordinate on a trusty service. The series must match from primary to subordinate
<jose> still, you can mix and match non-subordinate services and series
<Crypticus> have a great weekend everyone!
#juju 2015-02-14
<trave> does anyone have a suggestion what to try next?  vagrant box list
<trave> derp, didnt mean to send yet...
<trave> vagrant box list, lists JujuBox, but when i try to do a vagrant up, it says: Bringing machine 'default' up with 'vmware_fusion' provider...
<trave> ==> default: Box 'JujuBox' could not be found. Attempting to find and install...
#juju 2016-02-15
<agunturu> marcoceppi: Indeed that was the problem. Now it's working.
<agunturu> Thanks for responding
<marcoceppi> agunturu: cool, np - sorry it was a bit delayed
<stormmore> so I just noticed an oddity
<stormmore> I deployed the openstack-base bundle and it looks fine except for some reason I now have  41 machine requests
<marcoceppi> stormmore: that's a bit excessive
<marcoceppi> stormmore: it should only use 4
<firl_> any openstackers on?
<gnuoy> hi firl_, whats up?
<firl_> So every now and then one of my nodes goes down for some odd reason
<firl_> this case it was a node with neutron-api
<firl_> and a compute resource was on this
<firl_> when the reboot of the physical machine completed, networking would not come up for the instances until I moved neutron-api to another juju host
<firl_> ( kept getting timeouts, no dhcp packets on the qrouter etc )
<firl_> Is there a better way to fix the issue than moving neutron-api to and from another host? I have had to do this several times between different environments
<gnuoy> Was there anything useful in the neutron-server log on the neutron-api host?
<gnuoy> jamespage, can you remind me what that bug was with tunnels not being recreated?
<firl_> I destroyed the neutron-api host, I will gather that next time if I see it.
<firl_> Is there a way to force the recreation of the qrouter/qdhcp ip namespaces?
<admcleod-> regarding reactive, does @when_file_changed work in conjunction with say @when?
<apuimedo> jamespage: ping
<apuimedo> trying to use lxd in xenial with zfs
<apuimedo> http://paste.ubuntu.com/15073291/
<apuimedo> I was following http://www.jorgecastro.org/2016/02/12/super-fast-local-workloads-with-juju/
<jamespage> apuimedo, still here
<apuimedo> ?
<jamespage> apuimedo, yeah you're hitting the lxd 2.0 problem that I'm patching into my master branch built from source
<jamespage> jcastro, are you deploying on 14.04 or 16.04?
<jamespage> apuimedo, that might be the difference - the way lxd takes config updates changed, and I think 14.04 still has the older version
<jamespage> apuimedo, still reviewing midonet - hope to get through today
<apuimedo> I'm using 16.04
<jamespage> I have a few fixes for tests - I'll give them back to you once I know they work
<apuimedo> as the post was describing
<jamespage> apuimedo, hmm
<jamespage> jcastro, can you confirm that all's good and that your deployment is actually using zfs?
<apuimedo> jamespage: was your patch done to address this config issue?
<jamespage> apuimedo, yes
<jamespage> apuimedo,  I run with master + https://github.com/juju/juju/pull/4131.patch
<apuimedo> jamespage: can I haz amd64 binary package with the fix?
<neiljerram> Good morning!
<neiljerram> I wanted to ask about the current upstreams for nova-compute, nova-cloud-controller and neutron-api
<neiljerram> In other words, if I'm working on enhancements to those charms - as I am for Liberty support for Calico networking - against exactly what upstreams should I propose changes?
<jcastro> jamespage: I am deploying 14.04
<jcastro> jamespage: I noticed over the weekend though that the controller can be really flaky; if I fire one up and leave it, it works
<jcastro> but tearing it down and setting it back up over and over again eventually fails and I need to kill the container by hand, etc.
<jcastro> jamespage: I also noticed that destroying just models messes up the controller, like I have to kill the entire controller every time.
<tvansteenburgh> dpb1: you around?
<jamespage> apuimedo, https://code.launchpad.net/~james-page/charms/trusty/midonet-agent/trunk/+merge/286059 and -agent is good to go
<apuimedo> thanks jamespage. I'll review them now
<apuimedo> jamespage: one question about the lsb_release and get_os_codename
<apuimedo> doesn't this change make it more difficult to test trusty and xenial codepaths once both are supported?
<apuimedo> what do you usually do for the openstack-charmers charms
<jamespage> apuimedo, well we generally mock everything out - I'm running your tests on xenial - so they currently fail
<jamespage> unit tests should be deterministic across underlying ubuntu release
<jamespage> if you want to test xenial, have specific tests to cover that with appropriate mocking.
<apuimedo> ok
<apuimedo> jamespage: I'm actually running the tests on arch linux :P
<jamespage> \o/
<apuimedo> jamespage: the other thing is
<jamespage> apuimedo, I try to avoid anything that relies on the host os
<apuimedo> I guess that with the rmmod thing you are just checking if we are in a container
<apuimedo> and refuse to do the action if we are, is that right?
<apuimedo> is that related to running on lxd?
<apuimedo> jamespage: merged
<jamespage> apuimedo, awesome
<apuimedo> jamespage: thanks for the suggestions
<jose> does anyone know where the office hours were streamed?. I can't find them on ubuntuonair
<rick_h__> jose: yes, they were on the onair site. It's on the page here https://www.youtube.com/channel/UCSsoSZBAZ3Ivlbt_fxyjIkw
<jose> rick_h__: uh, ok. we have a channel for livestreams, but looks like you guys used another one. np though, thanks for the pointer!
<apuimedo> jcastro: any idea about the error I posted earlier following the steps on your blog?
<apuimedo> http://paste.ubuntu.com/15073291/
<apuimedo> in the bootstrap step
<jcastro> hmm, no idea on that one
<jcastro> did you perhaps launch some containers in the pool before setting up the config?
<apuimedo> nope
<apuimedo> not even the one example in the post
<apuimedo> jcastro: clean xenial install too
<jcastro> hmm, no idea on this one
<jcastro> have you posted to the list?
<apuimedo> jcastro: http://paste.ubuntu.com/15076104/
<jcastro> there are people more expert than me on the list
<apuimedo> no, not yet
<apuimedo> I wanted to see first with you if there was something I was missing
<jcastro> that looks the same as what I have
<apuimedo> http://paste.ubuntu.com/15076139/
<jose> apuimedo: what exactly is going on?
<apuimedo> I don't have it as home pool though
<apuimedo> I used "juju" as name
<jose> ok I just read the pastes, let's see...
<apuimedo> jose: I can't bootstrap juju on lxd
<apuimedo> (with zfs backend)
<jose> apuimedo: would you mind running `sudo lxc-ls --fancy`?
<apuimedo> jose: empty
<apuimedo> ubuntu@lxd:~$ sudo lxc-ls --fancy | pastebinit
<apuimedo> You are trying to send an empty document, exiting.
<apuimedo> ubuntu@lxd:~$
<jose> so there's that image, error says 'image or container' is using the pool. would it be much to ask to delete that image and then retry bootstrapping?
<_Sponge> jose, ARe there any videos going up today ?
<jose> _Sponge: sorry?
<_Sponge> Are there any videos being published on Juju or UbuntuOnAir channels, today ?
<_Sponge> BRBack.
<jose> I... don't think so?
<jose> it's a US holiday today as well
<jose> and I don't think there's any announced broadcasts
<apuimedo> hola
<apuimedo> I mean, yes
<apuimedo> wrong conversation
<apuimedo> xD
<jose> :P
<apuimedo> jose:  http://paste.ubuntu.com/15076545/
<jose> what the...
<jose> isn't juju supposed to download the image and create an instance and all of that?
<apuimedo> jose: jcastro's post had pulling the image as a step
<apuimedo> that's why it was on the list
<jose> I'm not too familiar with lxd deployments, I was trying to do some basic debugging. but apparently the error messages contradict themselves...
<marcoceppi> apuimedo jose no you have to do lxd-images import first
<jose> ohai marcoceppi
<marcoceppi> https://jujucharms.com/docs/devel/config-LXD#images
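The import step that doc describes, sketched; the exact flags follow the lxd-images tool of early 2016, so check lxd-images import --help if they've moved:

    lxd-images import ubuntu trusty amd64 --sync --alias ubuntu-trusty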
<jamespage> apuimedo, hmm - did something change in the key import code? I'm getting timeouts on the key import right now
<jamespage> trying to figure out whether its environmental or else...
<apuimedo> jamespage: nope
<apuimedo> unless we are having problems with our servers
 * jamespage scratches his head...
<jamespage> apuimedo, I think the interaction is with keyserver.ubuntu.com, but puppet is not exactly verbose about what timed out...
<apuimedo> marcoceppi: jose: that's what I had done
<apuimedo> to get the images
<apuimedo> and it exploded anyway
<apuimedo> lol
<apuimedo> I repeated the sync and then bootstrap again
<apuimedo> and, for no particular reason, it worked
<apuimedo> after all the weekend crashing
<jose> woot woot
<jose> glad things are working now :)
<apuimedo> jose: it gives me a bad feeling when things are so nondeterministic
<apuimedo> but I guess it comes with the alpha state
<apuimedo> jamespage: how do you keep several environments in lxd without re-bootstrapping?
<jamespage> apuimedo, create-model
<jamespage> gnuoy, I need to switch lint -> pep8 in the tox configurations across the charms - ok if I do that as a trivial?
<apuimedo> jamespage: and then juju switch I guess
<apuimedo> ok, done for the day
<apuimedo> thank you all ;-)
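The multi-model flow jamespage describes, sketched with the juju 2.0-alpha commands of the time:

    juju create-model test-model    # a new model on the same controller, no re-bootstrap
    juju switch test-model          # point the client at it
    juju list-models                # everything the controller hosts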
<bogdanteleaga> hello everybody
<bogdanteleaga> do charms need to be changed in any way for 2.0? I've got one charm I've been using for a while now for testing, but it doesn't even get downloaded to the machine
<bogdanteleaga> I'm using lxd
<bogdanteleaga> latest alpha+xenial
<apuimedo> jamespage: marcoceppi: is it possible that there's no amulet in juju-devel?
<marcoceppi> apuimedo: yes, amulet is only in ppa:juju/stable
<apuimedo> marcoceppi: so I can't run amulet tests with juju 2.0?
<marcoceppi> apuimedo: you can, you just need to also add ppa:juju/stable
<apuimedo> I hope I won't get conflicts :P
<marcoceppi> you won't
<marcoceppi> it's safe to combine the devel and stable PPAs. You'll get the devel juju but the stable versions of all the other packages
<apuimedo> ok
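A sketch of the PPA combination marcoceppi describes; no apt pinning is needed because, as he notes, only juju itself differs between the two:

    sudo add-apt-repository -y ppa:juju/devel    # juju 2.0 alphas
    sudo add-apt-repository -y ppa:juju/stable   # amulet and friends
    sudo apt-get update
    sudo apt-get install juju amulet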
<apuimedo> marcoceppi: I must be doing something lame http://paste.ubuntu.com/15079852/
<marcoceppi> apuimedo: let me take a look
<marcoceppi> apuimedo: it's not you, it' sme
<marcoceppi> I've just uploaded it for xenial, it was only available on wily and older
<marcoceppi> apuimedo: give it about 10 mins to show up
<apuimedo> thanks marcoceppi ;-)
<apuimedo> marcoceppi: it's smee? https://www.youtube.com/watch?v=bnh6ZDKOVOI
<stub> marcoceppi: I think I will move the reactive framework PostgreSQL charm to git://git.launchpad.net/postgresql-charm and stop taking merge proposals on the old branch.
<marcoceppi> stub: +1
<marcoceppi> stub: I'd also probably just delete the branch as soon as charm push comes out
<stub> marcoceppi: Delete which branch? The lp:charms/trusty/postgresql one?
<marcoceppi> stub: yeah
<stub> charm push will be able to hold a copy of the built charm, but I'd like a git branch to hold a copy too (for sites that can't use the store). Is it a stupid idea to keep that in the same git repository as the main branch?
<stormmore> marcoceppi: yeah it is excessive but the bundle only shows 4 so I am not sure where the 41 phantom machines came from
<stormmore> well actually 31 forgetting about the 10 containers that I am using
<marcoceppi> apuimedo: it's in xenial now
<apuimedo> thanks marcoceppi ;-)
<apuimedo> marcoceppi: installed!
<magicaltrout> writing a reactive charm without help, this is where I find out how much effect the alcohol had, or didn't.....
<redelmann> :P
<stormmore> well sounds like you should be fine since that sentence was cohesive
<magicaltrout> not this evenings alcohol, that is only just beginning
<magicaltrout> i'm wondering how much information i actually persisted in belgium :P
<bdx> thedac: whats up
<bdx> charmers, openstack-charmers: I need to get some input on how haproxy configs are rendered to /etc/haproxy/haproxy.cfg by the openstack services
<bdx> charmers, openstack-charmers: for example, I see here -> http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/trunk/view/head:/hooks/neutron_api_utils.py#L165
<bdx> that the haproxy context is generated by context.HAProxyContext
<bdx> charmers, openstack-charmers: but where in the codebase is the context written to /etc/haproxy/haproxy.cfg ?
<bdx> from what I can tell, http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/trunk/view/head:/hooks/neutron_api_utils.py#L320
<bdx> takes care of rendering the context into templates that are defined in the resource_map
<bdx> charmers, openstack-charmers: but how is the haproxy.cfg rendered for percona-cluster?
<schkovich> hi guys
<schkovich> i've been trying to set up a multi-user environment with no success :(
<schkovich> im getting environment "fred-local" not found
<schkovich> i diligently read documentation and followed Managing multi-user environments document :)
<marcoceppi> schkovich: which version of juju?
<schkovich> marcoceppi: 1.25.3-trusty-amd64
<schkovich> marcoceppi: it's supported since 1.21? right?
<marcoceppi> schkovich: uh, I'm not sure
<schkovich> marcoceppi: managing multi-users environments is in 1.24 docs
<marcoceppi> then, yes
 * magicaltrout wonders when the "let's run alpha and trunk builds in search of cool stuff" ethos will come back to haunt him :)
<marcoceppi> magicaltrout: longer than you'd think but sooner than you'd want
<magicaltrout> hehe
<magicaltrout> indeed!
<marcoceppi> schkovich: so, what steps are you taking?
<schkovich> i followed docs, juju user add fred -o /tmp/fred-local.jenv and so on
<schkovich> marcoceppi: this document https://jujucharms.com/docs/1.25/juju-multiuser-environments
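For reference, the documented flow schkovich is attempting, sketched from the linked page and the details later in this exchange (the jenv lands under ~/.juju/environments/ on the new user's machine):

    # on the admin machine
    juju user add fred -o /tmp/fred-local.jenv
    # on fred's machine: juju expects ~/.juju/environments/<env>.jenv
    mkdir -p ~/.juju/environments
    cp /tmp/fred-local.jenv ~/.juju/environments/fred-local.jenv
    juju switch fred-local
    juju status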
<marcoceppi> schkovich: let me install 1.25 and give it a whirl
<schkovich> marcoceppi: thanks a loooot! :)
<schkovich> marcoceppi: let me know if i can provide more information
<schkovich> marcoceppi: unfortunately there is nothing in logs :(
<schkovich> marcoceppi: i can confirm that user fred is added and enabled; however when i su as fred im getting environment not found;
<marcoceppi> schkovich: I think you need to give the fred user something
<schkovich> marcoceppi: something like? a pint of bear? ;)
<marcoceppi> schkovich: hah, I think the admin needs a pint of beer - fred needs to know the environments endpoint though
<schkovich> marcoceppi: lol; he does; confirmed :)
<magicaltrout> a pint of bear?! sounds grizzly....
<schkovich> marcoceppi: that is in jenv file
<schkovich> marcoceppi: variable addresses
<schkovich> marcoceppi: though that diverts from documentation
<marcoceppi> I just got a 1.25 juju environment bootstrapped
<schkovich> marcoceppi: ok; im not going to bug you any more
<marcoceppi> schkovich: okay, I can confirm what you're seeing
<marcoceppi> let me see if I can get this working
<marcoceppi> schkovich: interesting. It's not reading from the .jenv cache
<schkovich> marcoceppi: it's not reading $FRED_HOME/.juju/environments ?
<schkovich> marcoceppi: perhaps some environment variables are needed?
<marcoceppi> schkovich: it expects ~/.juju/environments/<env>.jenv
<marcoceppi> but it's not even getting that far.
<schkovich> marcoceppi: exactly
<schkovich> marcoceppi: same problem is present in 1.24 i tried to set it up in early january but did not have time to dig into the problem
<marcoceppi> schkovich: yeah, going to try in 2.0-alpha2
<marcoceppi> see if that's any better
<schkovich> marcoceppi: i moved a step further: 2016-02-15 23:53:30 WARNING juju.api api.go:140 discarding API open error: invalid entity name or password
<marcoceppi> schkovich: it works really well in 2.0-alpha2 :\
<schkovich> marcoceppi: ha
<marcoceppi> schkovich: yeah, i got that as well after moving things like state-server and such to the jenv file
<schkovich> marcoceppi: yes that's what i did as well
<schkovich> marcoceppi: is 2.0-alpha2 stable and reliable? in production?
<marcoceppi> no and nope
<marcoceppi> it's an alpha :\
<marcoceppi> it'll be released in ~ April though
<marcoceppi> and will be the recommended then
<schkovich> marcoceppi: will i have to change charms?
<marcoceppi> schkovich: no, charms written for 1.x are 2.0 compatible
<schkovich> marcoceppi: nice
<marcoceppi> schkovich: it's 2.0 because some of the APIs and commands are changing
<schkovich> marcoceppi: :( i have staging environment running on virtual maas and production in rackspace
<schkovich> marcoceppi: anyway, thanks; shall i file a bug report?
<marcoceppi> schkovich: I would
<schkovich> marcoceppi: will 1.* be maintained after 2.* is out?
<marcoceppi> 1.25.X will for a bit
#juju 2016-02-16
<marcoceppi> I can't remember, but probably a year or so after 2.0
<marcoceppi> so filing a bug is a good idea
<schkovich> marcoceppi: sure, i will file a bug report; launchpad or github?
<marcoceppi> schkovich: launchpad
<marcoceppi> schkovich: lmk when you do I can make sure people see it
<schkovich> marcoceppi: thank you for assistance once again :)
<schkovich> marcoceppi: have a nice day
<marcoceppi> np, hopefully the core team can sort this quickly
<schkovich> marcoceppi: it would be nice to have option to enable/disable functions per user
<marcoceppi> schkovich: AIUI ACLs will be coming to users in 2.0
<schkovich> marcoceppi: cool :)
<marcoceppi> schkovich: I just did a screencap of what sharing in 2.0 looks like, since I felt bad that we weren't able to get it working
<miken_> Is it possible to remove a service which was deployed to machine 0?
<blahdeblah> miken_: tried "juju destroy-unit unit-name/number"?
<miken_> blahdeblah: what I tried initially was destroy-service, which returned, but the unit is in an error state, so when I try to resolve, I just see /usr/games/sl output - I assume, warning me that if it proceeds it's going to break the env (not sure if that's something specific to the deployment environment)
<miken_> s/but the/but as the/
<blahdeblah> miken_: Hmmm - can't say I've seen that; Is that service deployed to other units as well?
<miken_> Nope, just machine zero (it's the ubuntu charm with basenode)
<blahdeblah> If it's in error state, you *might* be able to get it sorted just by "juju resolved ubuntu/0" (or whatever it's called)
<miken_> Right - it seems I was using `juju resolve` rather than `juju resolved` . Thanks
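The sequence that ended up working for miken_, roughly (the unit name ubuntu/0 is assumed from the conversation):

    juju destroy-service ubuntu   # marks the service for removal
    juju resolved ubuntu/0        # clears the hook error so the unit can actually go
    juju status                   # confirm the unit disappears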
<jamespage> gnuoy, if you have a few seconds - https://code.launchpad.net/~james-page/charms/trusty/rabbitmq-server/tox-switchover/+merge/286139
<gnuoy> jamespage, sure
<gnuoy> jamespage, is the apt_cache unit test change deliberate ?
<jamespage> gnuoy, yes - the patching was way too far down the stack
<jamespage> gnuoy, so switched to patching cmp_revno instead
<gnuoy> ack, it just didn't seem relevant to a move to tox
<jamespage> gnuoy, prob is that apt is not in pypi, and I don't like relying on stuff outside of the venv
<jamespage> it would just work tbh
<jamespage> but its not deterministic
<gnuoy> kk, makes sense
<jamespage> gnuoy, well that broke osci
<jamespage> nice
<gnuoy> yeah, I had an inkling it might
<jamespage> apuimedo, morning
<jamespage> apuimedo, just running tests on https://code.launchpad.net/~james-page/charms/trusty/midonet-api/trunk/+merge/286146
<jamespage> and then midonet-api lgtm
<jamespage> I had to add cassandra to the tests to get the agents to generate host uuid's
<jamespage> at least I think that was the case - also had a problem with memory on compute nodes...
<apuimedo> mmm, It should not be necessary to have cassandra for the host uuid
<jamespage> apuimedo, that may be surplus then - I may have got confused with midolman shutting down due to lack of memory
<jamespage> apuimedo, let me drop that and re-check
<apuimedo> lack of memory for sure ;-)
<jamespage> 1.5G is not enough
<jamespage> upped the constraint on the test to 4G
<apuimedo> jamespage: good choice, sorry I forgot
<apuimedo> I usually put 6G
<jamespage> 4 works - the tests don't execute instances so there is enough
<jamespage> apuimedo, hmm - I think cassandra is required
<jamespage> midonet-agent won't install midolman until the CassandraRelation is present
<apuimedo> oh, that's right. the software does not require it, but my charm does
<apuimedo> sorry, my bad
<jamespage> apuimedo, ok fixed in my branch - executing all of the amulet tests now to confirm thats all ok
<apuimedo> good
<apuimedo> :-)
<apuimedo> jamespage: did it work? (please say aye)
<jamespage> apuimedo, still running but looks like it will
<apuimedo> :-)
<jamespage> I see a long lag on the final hook execution on midonet-api
<jamespage> the hook is accessing the API, something times out and then it completes...
<jamespage> but not sure what
<jamespage> apuimedo, I think its that hang that I reported when I did my initial testing
<apuimedo> jamespage: in my openstack env it's due to access to port 8080 not being properly opened by juju
<apuimedo> haven't experienced it on maas though
<jamespage> apuimedo, we don't have security groups turned on:-)
<apuimedo> jamespage: anything on /var/log/tomcat/midonet* ?
<jamespage> apuimedo, http://paste.ubuntu.com/15090189/
<jamespage> its looping through those last few messages
<apuimedo> from Server - jetty?
<jamespage> apuimedo, not sure tbh
<jamespage> that's the midonet-api.log file
<jamespage> apuimedo, /var/lib/juju/agents/unit-midonet-api-0/charm/hooks/host-relation-changed started running at 10:48
<jamespage> I suspect I'll hit some sort of response timeout shortly
<apuimedo> very odd
<apuimedo> if you go as root in the midonet-api container
<apuimedo> does `midonet-cli -e "host list"` show anything
<apuimedo> or does it also block
<apuimedo> ?
<jamespage> apuimedo, test just completed and pulled down the env...
<apuimedo> ok
<apuimedo> I guess
<jamespage> apuimedo, ok - so looking good for https://code.launchpad.net/~james-page/charms/trusty/midonet-api/trunk/+merge/286146
<jamespage> just running the mem tests as well to verify our firewall configuration for the midokura.com repos
<stub> Is 1.25.3 technically a development version, or did that even/odd thing get scrapped?
<apuimedo> jamespage: I found the reason
<jamespage> apuimedo, pray tell?
<apuimedo> it was blocking on random number generation
<jamespage> apuimedo, ahhhhhhh
<apuimedo> just tomcat things :D
<apuimedo> I'll send a patch to change the source
<jamespage> apuimedo, oh I remember this
<jamespage> prng on startup used to suck
<apuimedo> yup
<jamespage> apuimedo, sudo apt-get install haveged
<apuimedo> on v5 we are not using tomcat anymore
<apuimedo> haveged?
<jamespage> not saying thats a great solution, but it does increase entropy on virt systems
<apuimedo> jamespage: I was planning on modifying tomcat config to have JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom".
<jamespage> that works as well
<apuimedo> ;-)
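Both workarounds mentioned above, as a sketch (the /etc/default/tomcat7 path is an assumption; adjust for the tomcat version actually installed):

    # option 1: feed the entropy pool on virtualised systems
    sudo apt-get install haveged
    # option 2: point the JVM at non-blocking urandom
    echo 'JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"' | \
        sudo tee -a /etc/default/tomcat7
    sudo service tomcat7 restart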
<jamespage> apuimedo, mem +1 as well
<apuimedo> mem +1?
<jamespage> it passes amulet tests...
<jamespage> for -agent and -api
<jamespage> running amulet for neutron-agents-midonet now as well
<apuimedo> great
<apuimedo> jamespage: I can merge https://code.launchpad.net/~james-page/charms/trusty/midonet-api/trunk/+merge/286146 then, right? (there's the please don't merge comment still)
<jamespage> apuimedo, go for it
<apuimedo> thanks
<jamespage> I have a similar one for neutron-agents-midonet - just running the tests now
<apuimedo> cool
<apuimedo> thanks a lot for these improvements
<jcastro> apuimedo: hey any luck on your zfs issue?
<apuimedo> jcastro: it went away by itself trying for the 13th time
<jcastro> heh
<apuimedo> jamespage: jcastro: I would like to add a package to cloud-archive for mitaka
<apuimedo> do you know who I should be speaking with?
<jcastro> not me, someone on james' team I would guess off the bat though
<apuimedo> cool, thanks
<apuimedo> jamespage: midonet-api patches successfully merged
<jamespage> apuimedo, ack
<jamespage> apuimedo, what's the package?
<jamespage> it needs to go into xenial development first
<apuimedo> jamespage: kuryr
<apuimedo> the libnetwork driver of kuryr, more specifically
<jamespage> apuimedo, pointer?
<jamespage> we're very close to feature freeze, but we could always go debian first if need be - its only a new leaf package
<apuimedo> jamespage: github.com/openstack/kuryr
<admcleod-> anyone know why 'juju status' would give me 'error: lock timeout exceeded' ?
<lazyPower> admcleod- is your bootstrap node out of disk space?
<lazyPower> *model controller
<admcleod-> lazyPower: nope
<admcleod-> lazyPower: nowhere near
<lazyPower> thats the only time i've seen that error
<jamespage> apuimedo, https://code.launchpad.net/~james-page/charms/trusty/neutron-agents-midonet/trunk/+merge/286181
<apuimedo> thanks jamespage
<apuimedo> jamespage: merged
<jamespage> apuimedo, ack
<jamespage> apuimedo, I have some general feedback on some further improvements - observability is not that great atm; but that won't block promulgation to the curated charm set
<jamespage> apuimedo, just letting the n-a-m tests complete...
<apuimedo> jamespage: what do you mean with observability? You mean putting status messages?
<apuimedo> like, connection to cassandra and zookeeper not ready
<apuimedo> for example?
<jamespage> apuimedo, status is one aspect yes
<apuimedo> what else?
<admcleod-> lazyPower: well.. removing '~/.juju/current.lock' fixed it..
<lazyPower> admcleod- well thats something i suppose
<lazyPower> admcleod- are you running stable or 2.0-alpha* series?
<admcleod-> lazyPower: stable
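The workaround admcleod- landed on, for the record; stable 1.x keeps a client-side lock under ~/.juju, and removing a stale one should be safe provided no other juju command is running:

    rm ~/.juju/current.lock
    juju status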
<jose> bdx: ping
<jamespage> apuimedo, ok - so all promulgated
<jamespage> https://code.launchpad.net/~charmers
<jamespage> they will ingest into the charm store in the next few hours....
<apuimedo> yay!!!
<apuimedo> thanks a lot jamespage! I would really appreciate an email with the suggestions to improve ;-)
<jamespage> apuimedo, two comments on https://bugs.launchpad.net/charms/+bug/1453678
<mup> Bug #1453678: New charms: midonet-host-agent, midonet-agnet, midonet-api <Juju Charms Collection:Fix Released> <https://launchpad.net/bugs/1453678>
<jamespage> apuimedo, thanks for all of your work on the charms - glad to get you finally landed!
<_Sponge> | http://ubuntuonair.com/ | 50 minutes to-go :)
<balloons> marcoceppi, jcastro, do you guys have any ideas or interest in juju + google summer of code? projects / mentors?
<marcoceppi> balloons: yes
<marcoceppi> I've been trying to figure out tasks and such for GSOC
<balloons> marcoceppi, we just need your ideas on https://wiki.ubuntu.com/GoogleSoC2016/Ideas
<balloons> the application deadline is Friday, but we don't need more than the snippets on the wiki page. Google has to accept our application, and then the solicitation of students will happen. Ideas don't have to be fully baked just yet
<balloons> but we do need ideas + commitment to mentor
<apuimedo> jamespage: :-)
<apuimedo> thanks for bearing with my inexperience in charm land
<jamespage> apuimedo, you are most welcome....
<jose> rick_h__: mind a quick PM?
<rick_h__> jose: not at all
<ryotagami> Is this the correct place to ask about charmhelpers?
<jose> bdx: lmk when you have a sec?
<jose> ryotagami: I'd say so. ask ahead and someone will get back soon :)
<ryotagami> jose: OK.
<ryotagami> http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/amulet/deployment.py#L61
<ryotagami> Line 61, by any chance the trailing comma is a typo? I don't think branch_location takes a tuple, which the trailing comma produces.
<stub> ryotagami: Yes, that certainly looks like a bug
<ryotagami> Ah, OK. Glad that it is confirmed.
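A quick shell one-liner showing why the trailing comma ryotagami spotted is a bug: it turns the string into a one-element tuple, which is not what branch_location expects (output shown is Python 2's):

    python -c 's = "lp:charm-helpers",; print(type(s))'
    # <type 'tuple'>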
<cory_fu> Reviews welcome: https://github.com/juju-solutions/layer-basic/pull/36
<cory_fu> Now you can say `@when('config.changed.my-opt')`
<lazyPower> nice!
<magicaltrout> here's a random question
<magicaltrout> if I develop a charm and I deploy it
<magicaltrout> then it gets accepted as a recommended charm and the namespace changes
<magicaltrout> do I have to undeploy my deployed charm to change to the recommended one?
<rick_h__> magicaltrout: no, you can switch over to it
<magicaltrout> thanks rick_h__ I have an up to date gitlab charm that I have to deploy for a client tomorrow but it would be nice to keep it in sync if I get it into the recommended namespace
<rick_h__> magicaltrout: yes, the upgrade-charm takes a --switch argument that let's you move to another url
<magicaltrout> coolio
<rick_h__> magicaltrout: so if it gets promoted you could update-charm --switch=cs:newurl
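So the flow magicaltrout wants, sketched (the deployed service name and the promulgated URL are assumptions):

    juju upgrade-charm --switch cs:trusty/gitlab gitlab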
<agunturu> Is it possible to get the details about submitted juju action
<agunturu> juju action status only returns job id and status
<rick_h__> agunturu: juju action status ?
<rick_h__> agunturu: https://jujucharms.com/docs/1.25/actions
<agunturu> That just returns the job id and status
<agunturu> The python client actually provides what actions are submitted. Just wondering if there is a verbose version.
#juju 2016-02-17
<marcoceppi> agunturu: you can juju action status <UUID> or juju action fetch <UUID> to get more details
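Roughly, the flow marcoceppi describes (unit and action names here are placeholders):

    juju action do mysql/0 backup   # queues the action and prints its UUID
    juju action status <uuid>       # status of that one action
    juju action fetch <uuid>        # full results/output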
<metsuke> does anyone use juju with a cinder backend and find it to be robust?
<Razva> hey! is there any known bug regarding network interfaces not named eth?
<axino> hi charmers, can I get your attention on https://code.launchpad.net/~chris-gondolin/charms/trusty/nrpe/fix-sub-postfix/+merge/285692 ? cc gnuoy` thedac
<jamespag`> gnuoy, working through beisner's 16.02 testing sweepup merges
<gnuoy> jamespag`, ah, ok. I said I'd do that if you want to leave that to me
<jamespag`> gnuoy, i have cycles inbetween things
<gnuoy> kk, thank you
<Razva> hey! is there any known bug regarding network interfaces not named eth?
<magicaltrout> thats like the most cryptic question ever
<jamespag`> beisner, gnuoy: hey can we switch amulet test cases to use /usr/bin/env python instead of /usr/bin/python?
<jamespag`> I'd like to be able to execute them in a venv (think under tox) but right now that's not possible
<gnuoy> jamespag`, I have no objection
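One way to make the switch jamespag` proposes across a charm's amulet tests, as a sketch (assumes the test scripts live under tests/):

    # rewrite the shebang so a tox/venv python is picked up instead of the system one
    sed -i '1s|^#!/usr/bin/python|#!/usr/bin/env python|' tests/*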
<Razva> magicaltrout I'm using Ubuntu 14 and trying to install OS using Juju. It seems that as long as I'm using ethX...the install will work. If I use the "native" naming (enoX, emX etc) it will basically...fail.
<Razva> OS = OpenStack
<magicaltrout> jamespag` and co are your people for that ;)
<jamespag`> Razva, can we see a log?
<jamespag`> will help with triage as to where your problem lies...
<jamespage> I was developing a tick
<magicaltrout> painful
<jamespage> beisner, your landme's are landed
<Razva> darn netsplits.
<marcoceppi> magicaltrout: I shot over a pull req https://github.com/OSBI/juju/pull/1
<magicaltrout> its like the 90's all over again
<marcoceppi> I really want m4 instance types in Juju
<magicaltrout> Razva: if you didn't see james asked for a pastebin log
<magicaltrout> oh thats mine marcoceppi thanks for that, I just ignored it as I assumed it was against the pull i submitted :)
<marcoceppi> magicaltrout: our builder will fail pull reqs that haven't been formatted according to golang's `go fmt`
<magicaltrout> ah
<magicaltrout> sorry about that
<marcoceppi> wallyworld: this has been updated https://github.com/juju/juju/pull/4426 any chance we can get a JFDI?
<wallyworld> marcoceppi: np, doing it now
<wallyworld> will be in beta1
<marcoceppi> wallyworld: \o/
<jamespage> gnuoy, which bits of https://pastebin.canonical.com/148351/ still need review?
<jamespage> beisner, yah - https://code.launchpad.net/~james-page/charms/trusty/neutron-gateway/xenial/+merge/285862
<jamespage> has some more bits there as well
<beisner> jamespage, ah good deal
<wallyworld> marcoceppi: merged \o/
<marcoceppi> woop woop!
<marcoceppi> thanks for the fix magicaltrout
<magicaltrout> sweet
<magicaltrout> hopefully I'll find some time to actually learn some go and do something more exciting than copy and paste
<magicaltrout> right, I have a quick one about publishing and stuff that I don't quite get
<magicaltrout> so I have a charm in github
<magicaltrout> and supposedly I can publish that to the charm store
<magicaltrout> so in my local repo I can run: juju publish --from=. cs:~f-tom-n/trusty/gitlab cs:~f-tom-n/trusty/gitlab
<magicaltrout> or something like that?
<magicaltrout> even though I've not created anything in launchpad or bzr?
<marcoceppi> magicaltrout: theoretically, yes, it'll look something like this:
<magicaltrout> nice caveating
<marcoceppi> charm push .
<marcoceppi> since the charm command knows who you are, and the name of the charm it'll just upload to cs:~f-tom-n/trusty/gitlab
<marcoceppi> the new charm 2.0 stuff is private beta atm, but I can see about adding you if you're interested
<magicaltrout> hmm someone lied then and told me I could do it! :P
<marcoceppi> you could do it, if you have early access to the tool - I imagine we're pretty close to public beta though esp since we're submitting them to Xenial
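For reference, the charm-tools 2.0 flow marcoceppi is describing, as a sketch; the command set was in private beta at the time, so details may have shifted:

    charm login                              # authenticate against the store
    charm push . cs:~f-tom-n/trusty/gitlab   # upload the built charm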
<magicaltrout> hello there people who know more than I do
<magicaltrout> submitting a layered charm
<magicaltrout> how on earth do you do it?
<magicaltrout> or do you submit the built thing?
<rick_h_> magicaltrout: you submit the built thing
<magicaltrout> k
<rick_h_> magicaltrout: and submit any layers you've added to http://interfaces.juju.solutions/
<rick_h_> if they're useful to others out there
<lazyPower> or  include in your finally assembled charms readme a breadcrumb trail back to your upstream layers source control system for bug reporting purposes, and the ~charmers will appreciate your attention to detail.
<magicaltrout> ah yeah good point
<icey> any support yet for controlling a remote lxd controller / juju env from an OSX host? The lxd machine would be running Xenial and be accesible over a local network
<beisner> icey, that would just rock.  better yet, control multiple remote lxd hosts and voila, a rockin local cloud.
<icey> beisner: yes, that is why I'm excited about lxd!
<icey> I'm trying to set it up from the release notes now :)
<rick_h_> icey: no, we'll look at adding more awesome lxd support, but it'll happen in the next cycle
<icey> awww rick_h_ :(
<icey> looking at the docs I'm not sure why I couldn't use the remote lxd host?
<rick_h_> icey: it might be something you can hack up. I just think the juju provider assumes it's a local lxd
<icey> rick_h_: it looks like there's config for remote-url so I'm hopeful :)
<rick_h_> icey: let me know how it goes. I've been wrong before, but thought that didn't make the current cut for feature freeze
<icey> if I get it working, I'm blogging about it because it'll be like LOCAL PROVIDER ON A MAC!
<icey> lazyPower: ^^
<rick_h_> even better :)
<lazyPower> icey: you said the magic word!
<lazyPower> man its 2016 and i have cut exactly 0 blog posts :/ i feel like i'm slackin
<rick_h_> lazyPower: I'm comin for you!
<lazyPower> oye?
<rick_h_> lazyPower: for blog posts :P
<rick_h_> lazyPower: need to get you doing one on resources maybe. Surely you've got some charm that can use it :P
<lazyPower> tempt me with a good time whydontchya :D
<lazyPower> I have a few ideas
<lazyPower> and a weekend to make it happen
<rick_h_> lazyPower: good stuff, just giving you a hard time :)
<icey> so far so good, rick_h_:
<icey> ./juju bootstrap -m lxd
<icey> Bootstrapping model "lxd"
<icey> Starting new instance for initial controller
<icey> Launching instance
<icey> from my mac
<icey> :)
<rick_h_> icey: interesting
<icey> $ lxc list
<icey> +-----------------------------------------------------+---------+------------------+------+-----------+-----------+
<icey> |                        NAME                         |  STATE  |       IPV4       | IPV6 | EPHEMERAL | SNAPSHOTS |
<icey> +-----------------------------------------------------+---------+------------------+------+-----------+-----------+
<icey> | juju-12f338d5-04f6-4628-88cb-0d18179c8d74-machine-0 | RUNNING | 10.0.4.97 (eth0) |      | NO        |         0 |
<icey> +-----------------------------------------------------+---------+------------------+------+-----------+-----------+
<icey> on the remote host :)
<suchvenu> Hi Kevin
<jacekn> kjackal: cory_fu: hey. Thank for the review, I updated my collectd layer so if any of you has time I would love re-review: https://bugs.launchpad.net/charms/+bug/1538573
<mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:Triaged> <https://launchpad.net/bugs/1538573>
<jose> question, let's say I have a config value that I want to be different for each instance, is there a set it with juju set?
 * icey sadface
<icey> lazyPower rick_h_: 2016-02-17 15:05:19 ERROR cmd supercommand.go:448 no registered provider for "lxd"
<icey> ERROR failed to bootstrap model: subprocess encountered error code 1
<icey> that's after we downloaded the tools
<icey> that's also in the cloud-init-output.log on the lxc container started to be the controller
<icey> but I can bootstrap this lxd environment from on that host
<kjackal> Thank you for your effort jacekn , I just changed the status of the bug ticket to new so that it comes back up on the review queue (I think it is needed)
<kjackal> I haven't seen anything like this jose
<icey> jose: the closest you will probably get is to deploy the charm multiple times to give them different configuration values as all units share the same configuration
<jose> right, right. I'll experiment a bit with actions and otherwise juju run. thanks!
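icey's suggestion, sketched: since config is per-service, deploy the same charm under two service names (charm and option names here are placeholders):

    juju deploy cs:trusty/mycharm svc-a
    juju set svc-a some-opt=value-for-a
    juju deploy cs:trusty/mycharm svc-b
    juju set svc-b some-opt=value-for-b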
<marcoceppi> icey: use --upload-tools
<icey> marcoceppi: ERROR failed to bootstrap model: no matching tools available
<marcoceppi> icey: that was /with/ upload-tools?
<icey> that may be the part that rick_h_ says is missing right now
<icey> yes marcoceppi, it gets a LOT farther without upload tools
<marcoceppi> what.
<icey> goes through to ERROR cmd supercommand.go:448 no registered provider for "lxd"
<icey> marcoceppi: machine I'm trying to bootstrap from is a Mac
<icey> if that helps
<marcoceppi> icey: you've got alpha2?
<icey> 2.0-alpha2-elcapitan-amd64
<icey> marcoceppi:
<jamespage> wolsen, re https://bugs.launchpad.net/charms/+source/cinder/+bug/1468306
<mup> Bug #1468306: Missing os_region_name for mult-region cinder with nova <cpec> <hitlist> <openstack> <sts> <cinder (Juju Charms Collection):Fix Committed by billy-olsen> <nova-cloud-controller (Juju Charms Collection):In Progress by billy-olsen> <nova-compute (Juju Charms Collection):In Progress by
<mup> billy-olsen> <https://launchpad.net/bugs/1468306>
<jamespage> cinder landed - nova-cc and nova-compute need a little change - but looking good
<marcoceppi> icey: interesting
<jamespage> mainly cause I jumped you with mitaka templates...
<jamespage> sorry...
<rick_h_> icey: hmm, yea I think it's something that we *should* be able to do without much more work, but with freeze and such it'll be something next cycle unfortunately
<rick_h_> icey: 2.0 has folks heads down atm
<icey> yeah rick_h_; ideally bootstrap lxd from mac will be there for 2.0 ;-)
<thedac> gnuoy: Can I ask you to merge this please. Already approved https://code.launchpad.net/~ajkavanagh/charm-helpers/add-service-checks-lp1524388/+merge/285604
<gnuoy> thedac, sure
<thedac> ta
<magicaltrout> whoop its only taken 2 days to write a very basic charm... its like belgium take 2 \o/
<lazyPower> magicaltrout - it gets easier every time you do it :)
<magicaltrout> hehe
<lazyPower> <3 me some layers and no more boilerplate
<lazyPower> and broken english apparently. but i digress, its quite pleasant :D
<magicaltrout> indeed, the charming bit wasn't what took time, it's the parsing of the configs and stuff for configuration changes that took some figuring out
<lazyPower> magicaltrout - we riffed on the bus right?
<magicaltrout> we did indeed
<lazyPower> nice :)
 * lazyPower adds you to the list "my people"
<magicaltrout> lol
<magicaltrout> er right
<magicaltrout> next random one
<magicaltrout> I'm running trunk.... as you do
<magicaltrout> i tried running bundletester but it seems to be using the non-trunk install I have, as it's attempting to run bootstrap
<magicaltrout> even though my env is bootstrapped
<magicaltrout> sound plausible? and can i get it to use my trunk stuff
<magicaltrout> I have my PATH set to use the trunk binary
<marcoceppi> magicaltrout: IIRC, bundletester resets your path for you ;)
<marcoceppi> let me take a look
<magicaltrout> bonus
<wolsen> jamespage, thanks for the review, I'll take a look at the comments on nova-cc and nova-compute
<gnuoy> thedac, landed
<thedac> ta
<magicaltrout> other question
<magicaltrout> actually, scrap it
<magicaltrout> i'm an adult I can work it out ;)
<marcoceppi> magicaltrout: feel free to ask anyways :)
<magicaltrout> nope I'm cool, apart from the charm updating appearing sluggish ;(
<magicaltrout> https://jujucharms.com/u/f-tom-n/gitlab/trusty/ up to date gitlab for the masses, I shall write a few more tests and get it into the store
<marcoceppi> magicaltrout: nice!
<magicaltrout> yeah, I'm just migrating a client
<magicaltrout> so the stuff that isn't yet charmed I need to charm
<magicaltrout> so I'll do my best to get it to a decent standard for general consumption
<magicaltrout> I need to finish off the PDI charm me and kwmonroe were working on tomorrow
<magicaltrout> thats the missing piece of the puzzle for this migration, but all that is missing is some random actions, the majority of the charm now works
<kwmonroe> and what a fine charm that was magicaltrout!  the memories bring much joy ;)
<magicaltrout> lol
<kwmonroe> my favorite part was when cory_fu looked at our python in disgust.
<magicaltrout> hehe
<kwmonroe> exit 0 for life!
<magicaltrout> "pfft you could have done that shit in bash"
<magicaltrout> I stuck with it though, I did gitlab in bash as well for no real reason other than, I could ;)
<magicaltrout> s/bash/python
<kwmonroe> heh, awesome.
<cory_fu> :)
<jose> bdx: ping
<lazyPower> interesting, has anyone seen a charm actively dismantle security features such as  enabling password auth and root login via ssh?
<cory_fu> lazyPower: No.  Why would a charm want to do that?
<lazyPower> cory_fu - good question, and its been noted in the review
<icey> marcoceppi: I managed to get it working by bootstrapping from the lxd machine and copying over Juju's files to my mac
<icey> now I can work with lxd from my Mac, I just can't bootstrap
<icey> looks like it may be having some other issue now though, juju shows a new machine as pending for a while...it seems to be trying to get tools from the host machine rather than the bootstrap node -_-
<marcoceppi> icey: this is because the bootstrap state-server is probably configured with the wrong address
<icey> I bootstrapped from the server lxd is running on, then tried deploying from a remote machine; deploy worked, container came up, but then it looks like yeah, the state-server was configured wrong; the bootstrap container had an ip on both the lxcbr and the normal network
<icey> lazyPower: your consul charm seems to be broken now
<lazyPower> icey - maintainer is in metadata and the readme :D
<lazyPower> das not meeeeee
<icey> The Consul charm maintainers are: - Charles Butler ( @chuckbutler ) ...
 * lazyPower touches his nose
<lazyPower> whats goin on with it icey ?
<icey> hashicorp pulled everything off of bintray
<lazyPower> ah
<lazyPower> thats a config.yaml update
<lazyPower> \o/
<icey> DL links have to change to releases.hashicorp.com
<icey> not a version thing...
<lazyPower> can you update config.yaml, give a quick test and PR that back upstream?
<beisner> jamespage, all clear on:  https://code.launchpad.net/~james-page/charms/trusty/neutron-gateway/xenial/+merge/285862
<icey> there are literally no longer any versions on bintray
<lazyPower> thats fine
<beisner> thedac, ^ as a prev. reviewer on that, can you have another peek?
<lazyPower> you should be able to change the source_url
<thedac> beisner: sure
<lazyPower> to whatever, just grab the sha1sum of the bin and stuff that in config
<icey> no source_url...
<beisner> thedac, tyvm
<icey> lazyPower: i'm going to have to muck around in the hooks/consul.py to fix it
<lazyPower> icey rewrite the charm, give me a PR
<lazyPower> :D
 * icey seems to be doing that a lot lately
<lazyPower> we like your work, what can I say?
<lazyPower> icey matt and i can update that with a source_url/hash config option
<lazyPower> did they drop deb delivery all together?
<icey> it's now through https://releases.hashicorp.com/consul/0.5.2/consul_0.5.2_linux_amd64.zip
<thedac> beisner: jamespage: merged \o/
<icey> with shasums at https://releases.hashicorp.com/consul/0.5.2/consul_0.5.2_SHA256SUMS
<beisner> thedac, right on!
<icey> lazyPower: almost ready to test this change and then I'll send a PR :)
<lazyPower> woo \o/
<lazyPower> icey - install_remote() will handle that natively
<icey> basically a lot of {0}_{1} => {0}/{0}_{1}
<icey> handle downloading the zip, download a separate shasum, confirm and drop in place?
<icey> lazyPower: ^
<lazyPower> should, the sum i think is expected to be handed to install_remote
<lazyPower> vs fetching the shasum file
<lazyPower> but it may just work, i haven't tried handing it a url
<icey> lazyPower: this way we can get the new shasum by just updating the version in config though :)
<icey> AND
<icey> it shouldn't break again given that it's /their/ own CDN to handle downloads
<lazyPower> we can hope
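The fetch-and-verify flow being wired into the charm, using the URLs icey pasted above (a sketch; in the charm the version would come from config):

    VERSION=0.5.2
    BASE=https://releases.hashicorp.com/consul/${VERSION}
    curl -sSLO ${BASE}/consul_${VERSION}_linux_amd64.zip
    curl -sSL ${BASE}/consul_${VERSION}_SHA256SUMS | grep linux_amd64
    sha256sum consul_${VERSION}_linux_amd64.zip   # should match the line above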
<marcoceppi> hey magicaltrout, where do you keep your gitlab layer? I have a few things I'd like to contribute
<magicaltrout> funny, I don't recall submitting my charm to the review queue ;)
<marcoceppi> magicaltrout: I'm really keen on gitlab, I tried to write a charm for it a few years ago, I run it myself manually
<magicaltrout> well
<magicaltrout> marcoceppi: the scale out stuff with omnibus isn't great, plus i'm not a fan of the complete package stuff either
<magicaltrout> but https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/install/installation.md
<magicaltrout> I don't mind doing that
<magicaltrout> but that involves manually compiling Ruby
<magicaltrout> !
<marcoceppi> magicaltrout: well upstream doesn't always know what's best ;)
<marcoceppi> the omnibus is fine, we can just modify the configuration afterwards
<magicaltrout> okay, so install via omnibus, and an exploded shell script
<magicaltrout> and then if someone connects an alternative postgres then the hook updates the config and shunts the DB over to the separate DB?
<magicaltrout> yesterday i was thinking about doing similar with Nginx even though it doesn't support proper clustering and scale out yet
<marcoceppi> magicaltrout: that's what I'm thinking, do an export and import of the database when related
<marcoceppi> on relation removal, remote dump the db locally then allow the removal
<marcoceppi> but that's a bit more involved
<marcoceppi> just starting out I like the idea of going single -> scaleout
<magicaltrout> aye
 * marcoceppi will submit a few pull req
<magicaltrout> okay marcoceppi, if you fancy tackling the DB as and when, I'll fiddle with optionally decoupling Nginx and the crappy script
<marcoceppi> magicaltrout: sure, sounds fun!
<magicaltrout> if hacking DB connection hooks is your idea of fun :P
<balloons> so marcoceppi, I get to bug you about GSOC again. Have you gotten your ideas on paper yet? I can fill them in the wiki even if needed. We need to have them done by Friday to submit the app
<marcoceppi> balloons: yeah, I had some questions
<balloons> I know jose is ok with doing some charm tasks.. But we could just assign the rest to him too ;-)
<marcoceppi> balloons: is this targeted like Google Code In?
<balloons> sure, ask away
<jose> wat?
<jose> me wat?
<balloons> marcoceppi, no it's not the same. This is 1 on 1 mentoring, and over the summer. so May - August
<marcoceppi> balloons: then I have a few ideas of charms we'd want to write
<jose> and for university students who are 18 or older, and currently enrolled in a university program
<marcoceppi> that I can help get on paper
<marcoceppi> give me like, 40 mins, otp
<balloons> right, so presumably they have more experience than the hs students
<balloons> marcoceppi, brilliant. jose, did you have charm ideas, or are you simply open to mentoring whatever marcoceppi comes up with?
<jose> balloons: I can probably be the backup for Marco's ideas
<balloons> tag-teaming them is a good idea
<lazyPower> charmschool all the things
<jose> lazyPower: you will mentor as well? fantastic!
<lazyPower> sure
<icey> is it possible to make a charm that will optionally be a subordinate?
<rick_h_> icey: not currently. It's declared in metadata to be a subordinate or not
<icey> rick_h_: that's what I thought; thanks!
<icey> is it possible to relate two charms through the peer relation?
<marcoceppi> icey: no, but you can use the same interface
<marcoceppi> in the provides/requires section
<icey> marcoceppi: so if I use the same interface, they will relate?
<marcoceppi> no
<marcoceppi> peers is only for inter-unit relations
 * icey is sad
<icey> I'll have to use the same hook for both a peer and specific relation then
<icey> alright, thanks marcoceppi!
<ChrisHolcombe> cmars, do you know if the az zone gets set for ec2 instances ?
<balloons> marcoceppi, how's it coming along?
<marcoceppi> balloons: any chance I can get back to you tomorrow?
<marcoceppi> I have to finish these FFe and I'm OTP atm
<balloons> marcoceppi, yep. Tomorrow is fine. Since they are in your head, it should just be as simple as finding some time to get them written down.
<balloons> thanks
<apuimedo> Good God gracious, the lxd deployer in juju 2.0 is so much faster than the juju-deployer in returning control
<apuimedo> good job guys
<apuimedo> the juju status is also much better
<marcoceppi> apuimedo: glad you like! it's been awesome to see what the juju-core people have been doing to make juju much nicer to use
<apuimedo> marcoceppi: the only thing I don't know is why the "overrides" section in my bundle doesn't work. Has it changed in v4?
<apuimedo> marcoceppi: agreed, the UX is miles better
<marcoceppi> apuimedo: overrides aren't officially part of the bundle spec, sadly
<marcoceppi> apuimedo: so they won't work in native juju deploy
<apuimedo> :(
<marcoceppi> apuimedo: we're working on defining how overrides should/will work in the next version of juju
<apuimedo> that's a bit inconvenient
<apuimedo> cool
<apuimedo> looking forward to that
<marcoceppi> apuimedo: it's not something we've forgotten, we just want to make sure the experience is good
<apuimedo> ;-)
<blahdeblah> Anyone know why juju mangles /root/.ssh/authorized_keys, and what I can do to fit in with that, whilst still enabling my charm to write a working entry in there?
<marcoceppi> apuimedo: Mark made some mention of "flavors" of a bundle onthe mailing list recently https://lists.ubuntu.com/archives/juju/2016-February/006545.html
<marcoceppi> blahdeblah: I wasn't aware juju mangles that - what provider are you using?
<blahdeblah> marcoceppi: openstack (in Canonistack)
 * apuimedo is ashamed. I've not followed the list recently
 * blahdeblah grabs sample
<marcoceppi> apuimedo: I admit I even have a hard time keeping up with all the mailing lists I watch
<marcoceppi> blahdeblah: interesting. I suppose it's so the operator can juju ssh <machine-#> which would be a root login
<apuimedo> yeah... With the openstack ones I have more than enough
<blahdeblah> marcoceppi: Except that the main point of it is to break that: http://pastebin.ubuntu.com/15103537/
<marcoceppi> blahdeblah: wat.
<blahdeblah> marcoceppi: read the fine pastebin :-)
<marcoceppi> blahdeblah: I did
<marcoceppi> that was my reaction
<blahdeblah> :-)
<marcoceppi> blahdeblah: I don't see that in my 2.0-alpha2 deployed service
<blahdeblah> I asked in #juju-dev (twice) why they do that, and I got crickets.
<marcoceppi> blahdeblah: is that a problem with the image you're using?
<blahdeblah> marcoceppi: this is stable
<marcoceppi> pfft, stable
<marcoceppi> who uses that! ;)
<blahdeblah> and the keys are clearly marked as being put there by juju - look at the last field
<marcoceppi> blahdeblah: sure, but the first parts of the file
<blahdeblah> marcoceppi: don't make me come over there! :-)
<marcoceppi> I'm not sure that's juju
<marcoceppi> let me boot up my 1.25.3 vm
<beisner> thedac, up for landing 2 test updates?  takes n-c-c and dashboard to the maximum passable test level, and marks the end of this pass of test updates from me.
<beisner> https://code.launchpad.net/~1chb1n/charms/trusty/nova-cloud-controller/next-amulet-1602b/+merge/286392
<beisner> https://code.launchpad.net/~1chb1n/charms/trusty/openstack-dashboard/next-amulet-1602/+merge/285955
<thedac> sure, I'll take a look
<beisner> thedac, appreciate it
<thedac> beisner: just want to confirm that lines 215 - 220 are correct on the nova-cc. >= trusty_mitaka use legacy_ratelimit? Seems like that should be the "future" not legacy
#juju 2016-02-18
<beisner> thedac, ack on that.  driven by the mitaka template @ http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/nova-cloud-controller/next/view/head:/templates/mitaka/api-paste.ini#L111
<jamespage> urulama, morning - our /next branches under https://jujucharms.com/u/openstack-charmers-next appear to have stopped ingesting - could you take a look?
<urulama> jamespage: will do
<urulama> jamespage: https://api.jujucharms.com/charmstore/v4/changes/published?start=2016-02-17
 * jamespage scratches his head
<jamespage> urulama, not sure why https://jujucharms.com/u/openstack-charmers-next/neutron-api/trusty
<jamespage> != https://code.launchpad.net/~openstack-charmers-next/charms/trusty/neutron-api/trunk
<urulama> jamespage: https://api.jujucharms.com/charmstore/v4/~openstack-charmers-next/neutron-api/meta/extra-info/bzr-digest shows it is the last revision ... however, the revision history is different. we did migration of some legacy services from PS3 to PS4.5, there might still be some issues
<urulama> jamespage: also, https://api.jujucharms.com/charmstore/v5/~openstack-charmers-next/trusty/neutron-api/archive/tox.ini shows the change in code you did on Feb 16th
<jamespage> hmm so its probably just the change history that is foobar right?
<urulama> jamespage: yes
<jamespage> urulama, ok thanks
<urulama> jamespage: sorry for the confusion, but we're currently focused on new publishing process, keeping legacy ingestion alive but no improvements
<jamespage> urulama, np - understand
<jamespage> can't wait to switchover tbh
<urulama> :) as do we
<magicaltrout> okay I'm new to EC2 VPC having come from EC2 Classic. But what's Juju supposed to do wrt firewalls and routing?
<magicaltrout> if I expose a service I would assume I can connect to it from the outside world
<magicaltrout> but I've had to slap on a custom rule in my VPC security group
<magicaltrout> expose seems to be doing absolutely nothing
<gennadiy> hi everybody. i have got some issues with my bundle again - new version doesn't appear in charm store -
<gennadiy> https://jujucharms.com/u/tads2015dataart/
<gennadiy> but it's pushed to launchpad - https://code.launchpad.net/~tads2015dataart/charms/bundles/tads2015-demo/bundle
<gennadiy> how much time do i need to wait until the new version is published?
<magicaltrout> last night was taking a couple of hours
<gennadiy> thanks, do i need to attach bugtracker to bundle ? point #4 - https://jujucharms.com/docs/1.25/charms-bundles
<magicaltrout> are you trying to get it into the recommended name space?
<gennadiy> no, i need userspace only
<magicaltrout> then you can ignore that
<gennadiy> thanks
<magicaltrout> the bundles will update eventually, I think its just running a bit slow
<smartbit> How do I verify the version of juju-gui? I want to test https://bugs.launchpad.net/juju-gui/+bug/1542652
<mup> Bug #1542652: juju-gui hangs on "Connecting to the Juju environment" <juju-gui:Fix Released by bac> <https://launchpad.net/bugs/1542652>
<magicaltrout> smartbit: can you do juju status --format yaml juju-gui
<magicaltrout> and get the charm version back?
<magicaltrout> some people make me so sad, a company I work for deem it acceptable to have an 11 minute Hive query running in a reporting tool
 * magicaltrout whips out the shotgun
<marcoceppi> magicaltrout: welcome to the bleeding edge
<magicaltrout> hehe
<smartbit> Which .box files do you recommend downloading with the newer juju-gui?
<magicaltrout> its fun
<magicaltrout> although a way to "export" running nodes and import them into a newly bootstrapped environment would be cool
<bac> smartbit: if you follow the same vagrant instructions you'll get the new juju-gui charm.  look forward to hearing if it now works for you.
<smartbit> bac: Thanks will give it a try. Tried ubuntu/trusty64-juju' (v20160210.0.0) and that had juju-gui/0 agent-version: 1.25.3.1
<bac> smartbit: that is the version of the juju agent, not the juju-gui
<smartbit> bac: all the output I got from "juju status --format yaml juju-gui"
<bac> smartbit: the default charm at cs:juju-gui has the fix and is the one that vagrant pulls
<bac> smartbit: you can check the version of the gui that is running by going to http://<your gui IP>/version
<smartbit> bac: https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-juju-vagrant-disk1.box works like a charm :-)
<smartbit> http://127.0.0.1:6079/version showed "version": "2.0.3"
<smartbit> Thanks for helping out.
<bac> smartbit: good news!  thanks for the bug report.  sorry for the inconvenience.  we'll keep an eye on vagrant in the future.
<smartbit> bac: took me quite some time. Fixed now.
<smartbit> Two other inconvenient items: 1) "Installing Virtualbox Guest Additions 5.0.14 - guest version is 4.3.36" which takes quite some time.
<smartbit> 2) after "You have not informed bzr of your Launchpad ID, and you must do this to
<smartbit> ==> default: write to Launchpad or access private data.  See "bzr help launchpad-login".
<smartbit> ==> default: Branched 19 revisions." the output scrolls some 200-500 lines in bold-red mostly with a single character, while receiving some data.
<smartbit> Should I file a bug for each of these? If so, what would be most appropriate topic?
<bac> smartbit: those should be filed against the vagrant image project...but i'm not sure where that is on launchpad
<bac> smartbit: i'll find out
<rick_h_> aisrael: ^ do you know?
<jamespage> icey, sorry but can you rebase https://code.launchpad.net/~chris.macnaughton/charms/trusty/ceph-osd/storage-hooks/+merge/284445 as well? showing a conflict
<icey> ack jamespage
<icey> done jamespage
<jamespage> icey, ta - I'll let osci run and then take a look
<aisrael> rick_h_: bac: Vagrant bugs can be opened here: https://launchpad.net/juju-vagrant-images
<rick_h_> aisrael: ty
<rick_h_> smartbit: ^
<bac> thanks aisrael
<smartbit> rick_h: thanks
<smartbit> bac: added comment to https://launchpad.net/bugs/1542652 regarding description of ws-secure. It seems to me ws_secure has three states (trivalent, ternary, or trilean) now: True, False and Unset. The description says it is Boolean.
<mup> Bug #1542652: juju-gui hangs on "Connecting to the Juju environment" <juju-gui:Fix Released by bac> <https://launchpad.net/bugs/1542652>
<bac> smartbit: juju does not support such a type.  sorry for the imprecision but i think the description does adequately describe it.
<smartbit> bac: understand. No prob.
<lazyPower> kjackal o/ heyo
<lazyPower> i hear you're having an issue with your OpenStack deployment?
<kjackal> yes indeed
<lazyPower> can you re-paste the bits about the symptoms?
<kjackal> request (https://cyclades.okeanos.grnet.gr/compute/v2.0/os-availability-zone) returned unexpected status: 400; error info: {"badRequest": {"message": "API endpoint not found", "code": 400, "details": ""}}
<kjackal> this happens when I try to bootstrap this private Openstack cloud
<lazyPower> beisner jamespage ddellav - have either of you seen this happen after standing up the openstack bundle?
<kjackal> I am not sure if this cloud runs on the latest release
<kjackal> no no wait
<kjackal> I didn't setup this cloud myself
<lazyPower> ah so this isn't charmed openstack?
<kjackal> no, it is not
<lazyPower> well that changes things a bit
<beisner> trying to juju deploy something on top of an openstack cloud using the openstack provider?
<kjackal> yes
<kjackal> the cloud already exists
<kjackal> they claim they offer an openstack API
<beisner> kjackal, what cloud is it?
<kjackal> I am wondering how does juju figureout the API version to use when talking to openstack
<beisner> kjackal, also any idea what openstack release it's running?   also, whether it's got keystone v1/v2 support?
<kjackal> the cloud is this one: https://okeanos.grnet.gr/home/
<kjackal> I am not sure about the versions
<kjackal> is there a way to check this?
<kjackal> Ahh wait
<kjackal> auth-url: https://accounts.okeanos.grnet.gr/identity/v2.0
<kjackal> so this should indicate it is a 2.0 ?
<beisner> ok good, there are some known issues w/ keystone v3 at the moment so that's not it
<beisner> kjackal, at this point, i'd tend to export the openstack credentials and poke around their cloud as your user with openstack clients.  there are a ton of variables in play.
<kjackal> sounds like a plan
<kjackal> any suggestions on the client?
<beisner> python-openstackclient python-novaclient python-keystoneclient
<kjackal> ah I see what you are saying! I got myself a weekend project then :)
<kjackal> are there any "hidden/special" config options that can be used in environments.yaml ?
<beisner> kjackal, yah, basically inspect the keystone catalog   keystone endpoint-list   and use nova to create/delete instances and security groups in the same way that juju would.   fyi --debug on the client cmds for eyebleeding detail.
<beisner> i'm not sure of any undocumented enviro options on this
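A sketch of the poking-around beisner suggests; the openrc path is an assumption, and the instance name is a placeholder - use whatever file holds the same credentials juju was given:

    source ~/openrc
    keystone endpoint-list             # inspect the service catalog
    nova --debug list                  # --debug gives the eyebleeding detail
    nova boot --image <image-id> --flavor m1.small juju-probe
    nova delete juju-probe             # clean up, mirroring what juju would do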
<jamespage> dosaboy, re https://code.launchpad.net/~hopem/charms/trusty/cinder/default-to-glance-v2-api/+merge/284752
<jamespage> where is the cutoff for use of v2? if its icehouse, lets drop the config option altogether and just set v2 please
<jamespage> essex did not have cinder...
<beisner> rockstar, zul - i've got a 2-node lxd deployment topology in the amulet test, and i'm seeing inconsistency on lxc config options on units:
<beisner> http://pastebin.ubuntu.com/15111185/
<beisner> storage.lvm_thinpool_name: LXDPool
<beisner> seems like it should be on both units
<beisner> doh.  the 2nd unit also has an empty `lvs` whereas the first unit has the LXDPool volume group as expected
<dosaboy> jamespage: good point, and the only pre-Icehouse version we still support is Essex, so i'll drop the config option
<dosaboy> jamespage: is it safe to just remove the config option altogether?
<beisner> rockstar, issue:  i can't file a stinkin' bug against the lxd charm until it promulgates some time down the line.  where do we track/file bugs?
<dosaboy> i guess if you upgrade it will ignore it and complain if you try to set it again, so should be safe
<gennadiy> hi everybody, i pushed my bundle 5 hrs ago - but it hasn't updated in the store yet - https://code.launchpad.net/~tads2015dataart/charms/bundles/tads2015-demo/bundle
<gennadiy> i used juju bundle proof before commit, everything was ok
<jamespage> dosaboy, +1 yes - we may need to update bundles to reflect that change but I think that's fine
<jamespage> no point having a knob that's useless
<jamespage> beisner, updated my os dash mp - testing ok with the default bundle on xenial - lets see what amulet says...
<jamespage> dosaboy, only people we might annoy is those with that set in a bundle
<jamespage> but we can release note this
<rockstar> beisner: I'm not sure where we track lxd charm bugs.
<rockstar> lxd proper bugs are handled on github.
<jamespage> can someone do me a quick +1 on https://code.launchpad.net/~james-page/charm-helpers/fixup-midonet-liberty-nonmem/+merge/286524
<jamespage> testing ok with neutron-api
<lazyPower> jamespage - approved
<jamespage> lazyPower, ta
<thedac> beisner: nova-cc merged
<beisner> thedac, thx sir
<stokachu> latest juju 2.0 beta1 login api started giving me this error on login: this version of Juju does not support login from old clients (not supported)
<stokachu> I do pass Version: 2 into the parameters to utilize the older login api
<stokachu> here is the request parameters used: https://github.com/Ubuntu-Solutions-Engineering/macumba/blob/master/macumba/api.py#L59-L64
<stokachu> I also tried version 3 but same error message, any ideas?
<magicaltrout> I found similar today
<magicaltrout> the solution from the people that know was tear it down and start again ;'(
<magicaltrout> so I did :)
<magicaltrout> client <-> server mismatch
<stokachu> magicaltrout: that directed to me?
<magicaltrout> yeah
<stokachu> ok ill try again with a fresh bootstrap
<stokachu> magicaltrout: did that work for you? what api client are you using
<marcoceppi> stokachu: what do you use to draw the full screen openstack-installer stuff from python?
<stokachu> urwid is the toolkit
<stokachu> marcoceppi: ^
<rick_h_> marcoceppi: we used the same python tool in quickstart
<stokachu> marcoceppi: i also have a library for widgets i commonly use https://github.com/Ubuntu-Solutions-Engineering/urwid-ubuntu
<rick_h_> ooh, custom widgets ftw
 * marcoceppi wants to take a crack at juju-top
<stokachu> that would be cool
<stokachu> magicaltrout: doesn't look like that works for me
<stokachu> fresh bootstrap and the login api is failing
<magicaltrout> sorry stokachu, clearly I lied :(
<stokachu> wallyworld: https://github.com/juju/juju/commit/472e2d83a4edceea11e7dbee28c5bde78a920ce2 i think this is what is affecting me
<stokachu> wallyworld: i attempted to set my version to 3 but i must still be doing something wrong
<jamespage> dosaboy, so the multi l3 networks stuff - did you see my comments on the ceph one?
<dosaboy> jamespage: yeah i fixed it already
<stokachu> wallyworld: ok i got it worked out now
<stokachu> disregard last messages
<jamespage> oh lookling now then dosaboy
<jamespage> dosaboy, not sure that's quite what I meant
 * jamespage looks some more
<jamespage> dosaboy, you still need get_network_addrs to deal with resolution of public address when multiple l3 nets are in use
<dosaboy> jamespage: otp, i'll take a closer look after, i might have rushed that one :(
<jamespage> dosaboy, you went too far!
<dosaboy> jamespage: oops now i see it ;)
<kwmonroe> lazyPower: what's the right way to deal with a layered charm that has no unit_tests dir? layer-basic has it in the Makefile, but if the consuming charm doesn't include a unit_tests dir, lint gives this "unit_tests:1:1: E902 IOError: [Errno 2] No such file or directory: 'unit_tests'"
<lazyPower> either provide a makefile that doesn't look for unit_test, or write unit_tests i suppose
<kwmonroe> kjackal just ran into that ^^  i see in one of your charms, you used an empty __init__.py.  i'm curious if that's the best way to do it.
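(Spelling out the empty-__init__.py workaround kwmonroe mentions — a placeholder package gives the Makefile's lint/test targets something to find; whether it's the best way is the open question above.)
    # from the charm's top level
    mkdir -p unit_tests
    touch unit_tests/__init__.py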
<beisner> jamespage, your dashboard MP test hit bug 1546209 for wily
<mup> Bug #1546209: Wily: apache2 Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:70 <uosci> <openstack-dashboard (Juju Charms Collection):New> <https://launchpad.net/bugs/1546209>
<jamespage> beisner, I saw
<beisner> jamespage, ack.
<jamespage> I'll take a peek now
<gennadiy> can somebody help me with this? "i pushed my bundle 5 hrs ago but it hasn't updated in the store yet - https://code.launchpad.net/~tads2015dataart/charms/bundles/tads2015-demo/bundle"
<jamespage> beisner, https://bugs.launchpad.net/charms/+source/neutron-gateway/+bug/1547122
<mup> Bug #1547122: xenial: nova-api-metadata not running post deployment <openstack> <xenial> <neutron-gateway (Juju Charms Collection):New> <https://launchpad.net/bugs/1547122>
<jamespage> nice
<beisner> jamespage, hah.  oh neat
<dosaboy> jamespage: sanity hopefully restored to l3 patches
 * jamespage peeks
<dosaboy> jamespage: i'm deploying now to test but let me know if that is what you were after
<jrwren> gennadiy: i'm looking into it.
<jrwren> gennadiy: bundle verification failed: ["cannot validate service \"restcomm\": configuration option \"sms_proxy\" not found in charm \"cs:~tads2015dataart/trusty/restcomm-4mesos-1\""]
<balloons> marcoceppi, how's the list looking today? I have down charms for haystack and parse.
<marcoceppi> what are those?
<balloons> no idea, it was just in the pad of old ideas.
<marcoceppi> balloons: we want layers
<marcoceppi> so we want a rails layer, php layer
<marcoceppi> to start
<balloons> ok, brilliant. I'll add those
<balloons> what can we quantify as the deliverable. A published layer?
<marcoceppi> balloons: i'll have a few more
<marcoceppi> a layer that's published and used by one charm
<marcoceppi> typically to prove the layer is good the author will publish a charm using it
<balloons> ahh, makes sense
<jamespage> beisner, I think the wily/liberty failure is a restart race on apache2
<beisner> jamespage, yup looks like it
<metsuke> does juju support multi-user environments?  I know we can make users, but can they be segregated between environments?
<beisner> jamespage, trying to bind before the old proc has let go
<jamespage> beisner, I think so but I can't repro on the box
<jamespage> beisner, well anyway i've disabled that test for now - I'd like to see if we get the same on xenial
<beisner> jamespage, clearly this is a lie :-)   Feb 13 04:14:33 ubuntu systemd[1]: Stopped LSB: Apache2 web server.    i'd suspect that bit in apache2 init scripts (validating that it has stopped).
<beisner> jamespage, ack thx
<gennadiy> 2jrwren thanks a lot
<pmatulis> i tried to 'create-model' and got this: "cmd supercommand.go:448 opening model: "controller-resource-group" config not specified" - what is that about?
<Razva> is there any way to tail a juju deployment via autopilot? I cannot really find any logs of the ongoing installation...
<pmatulis> something to do with the azure provider
<pmatulis> axw: do you know?
<pmatulis> this is the first time i hear of a parameter for azure called 'controller-resource-group'
<beisner> hey coreycb - any clue of topology, config, relations, etc., for the barbican + aodh thing?
<beisner> also, what u:os release combos should they be used for?
<coreycb> beisner, I think we still need charms for those 2
<coreycb> but designate I believe would be ready for putting in bundles
<beisner> coreycb, oh ok.   /me ignores card until charms exist with amulet tests.
<coreycb> ddellav and I are just working through package updates so we can get the packages into main and I realized we don't have a great way to test them
<coreycb> beisner, ok
<cloudguru> Can anyone confirm the latest version of the layer/docker charm fully supports and has tested DOCKER_OPTS ?
<firl> lazyPower any updates on kubernetes and the charming progress?
<lazyPower> firl mbruzek and i are on a hangout right now debugging the etcd bug we thought we had squashed
<firl> haha ok
<lazyPower> if you want to join we can riff on some k8s
<mbruzek> firl: What parts are you interested in?
<firl> I have 20 minutes; don't mind getting on
<lazyPower> https://plus.google.com/hangouts/_/canonical.com/help-charm-helpers
<lazyPower> the more the merrier
<firl> requested
<beisner> zul, o-c-t merged
<beisner> zul, i get an error instance when i use raw instead of root-tar for the image format.
<beisner> it works with root-tar
<firl> celpa.firl@gmail.com
<lazyPower> icey - did you get a PR formed for consul?
<icey> lazyPower: I'll send you an MP today, I need to make 2 actually :)
<icey> fix the install, and then one to integrate with a consul-agent
<lazyPower> ok just curious - i went and looked. are you planning on proposing against upstream github or the ~zoology lp repo or?
<icey> I can do whichever you'd rather, where is the GH repo?
<lazyPower> mbruzek ^
<mbruzek> icey: https://github.com/mbruzek/consul-charm
<mbruzek> icey: that was written before I knew about layers
<icey> mbruzek: I know :) I'm going to send a couple of PRs that are fairly small :)
<icey> mbruzek: you already did the updates to change the release path
<icey> WHY ISN'T THAT ON THE CHARMSTORE!?
<mbruzek> icey: we are working on different stuff
<mbruzek> my bad
 * icey sadface
<mbruzek> icey: truth be told I want to rewrite it in reactive.
<mbruzek> I find those charms much easier to read and maintain.
<icey> awesome
<mbruzek> but you want what is in my github in the charmstore?
<mbruzek> icey: I am running the bundletests now if they work I will propose that as a new charmstore branch
<icey> mbruzek: the version currently in the charmstore cannot deploy so yes, that would be nice :)
<mbruzek> icey: well then *you* can review the changes to expedite the process
<icey> mbruzek: I can say they're nice but I'm community ;-)
<icey> and I think the one in the charmstore is namespaced under zoologists
<mbruzek> yeah that is the right place for it, I never got it proposed to ~charmers
<mbruzek> I don't want to until it is a layer
<mbruzek> but I can put it in the zoo
<icey> mbruzek: let me know where you'd like me to look at it and I'm happy to do so
<cory_fu> lazyPower: Hrm.  https://github.com/juju/charm-tools/pull/103 is missing a way to remove values from merged lists.
<cory_fu> I feel like we need an extension to or a generalization of the "deletes" functionality in layer.yaml: https://github.com/juju/charm-tools/blob/master/doc/source/build.md#layeryaml
<lazyPower> yeah i had made mention during my 1x1 that the PR was probably not a full fix for what we were looking at trying to do
<lazyPower> cory_fu lets back that out until its gotten a proper round of tests yeah?
<lazyPower> cory_fu would you prefer i submit an uncommit or do you want to peel it back?
<cory_fu> Hrm.  This merged into master and not road-to-2.0, so I'm guessing it won't go into the next release anyway?
<cory_fu> Or is that backwards?
<lazyPower> I'm not certain tbh
<cory_fu> What further testing are you wanting to do, though?
<lazyPower> find out what is actually happening to metadata.yaml thats causing the sum mismatch in the test
<cory_fu> I think it's pretty clear that the categories list is getting combined and the "databases" key is duped
<cory_fu> Due to it being present in both layers.  I'm actually ok with that, now that I've thought about it for a while, but we do need some way of removing list values set by lower layers if we're going to do merging
<lazyPower> ok, de-dupe by default right?
<axw> pmatulis: controller-resource-group is an internal thing. if it's complaining about that not being set, there's a bug - you shouldn't need to set it
<pmatulis> axw: ok, i opened a bug
<axw> pmatulis: thanks
#juju 2016-02-19
<metsuke> does juju 2.0 support multi-user environments?
<metsuke> with permissions?
<lazyPower>  there was a pretty huge charm helpers MP that fundamentally changes charmhelpers.core.host - but abstracts and introduces centos support.  https://code.launchpad.net/~dbuliga/charm-helpers/charm-helpers/+merge/285044 It looks good to me, but lacks unit tests. I would love to get another set of eyes on this
<lazyPower> if anyone has the time ^ :)
<marcoceppi> lazyPower: I looked over it, and it LGTM as far backwards compat
<marcoceppi> Denis showed it to me at the charmers summit
<lazyPower> nice
<lazyPower> i didn't want to merge it carte blanche without checking with the openstackers
<marcoceppi> lazyPower: I agree
<marcoceppi> have one of the openstack charmers review it as well
<neiljerram> Hi all!
<lazyPower> o/ neiljerram
<neiljerram> Just wondering if anyone has seen a problem like this with the nova-compute charm:
<neiljerram> 2016-02-18 14:54:30 ERROR juju-log FATAL ERROR: Could not determine OpenStack codename for version 12.0.1
<neiljerram> 2016-02-18 14:54:30 ERROR juju.worker.uniter.operation runhook.go:107 hook "config-changed" failed: exit status 1
<jcastro> http://askubuntu.com/questions/727332/how-can-i-use-multiple-cinder-sources
<jcastro> here's one for you openstackers!
<jamespage> thedac, https://code.launchpad.net/~james-page/charms/trusty/openstack-dashboard/xenial-support/+merge/286508
<thedac> jamespage: thanks. I'll get that landed
<thedac> tinwood: I will make your suggested changes to my keystone MP. Thanks for taking a look.
<tinwood> thedac, np
<cory_fu> kwmonroe: https://github.com/juju-solutions/charms.reactive/blob/master/bin/charms.reactive.sh#L61
<QCC> fu ftw!
<cory_fu> This is also useful info on layers, and the different types of layers: http://bigdata.juju.solutions/2015-10-29-now-youre-charming-with-layers
<cory_fu> QCC: ^
<cory_fu> Subordinates are different than layers
<cory_fu> status-set maintenance "Downloading large file <foo>..."
<cory_fu> status-set active "Ready"
<cory_fu> "secondary-namenode unknown"
<jose> 'ello. anyone here who's got access to the juju ci jenkins?
<jose> tvansteenburgh seems to not be around atm
<lazyPower> jose whats up?
<jose> lazyPower: tests on lxc are hitting an error, port 37010 is already in use as a state port and it exits 1
<lazyPower> jose he's out on vacation next week too. so might be a minute if you wait for tim :)
<jose> uh oh
<lazyPower> jose - have a link for me to hand over as evidence?
<jose> lazyPower: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/2625/console but will create a pastebin since those expire
<lazyPower> jose - also have you hit this multiple times or just now?
<jose> lazyPower: multiple times, one being in the charmers summit and he insta-fixed it
<marcoceppi> jose: I don't remember how to fix it, sadly
<jose> marcoceppi: no worries. just as an idea, maybe there's a bootstrapped env that hasn't been killed?
<marcoceppi> that's the problem
<freyes> hi, is there a way to remove a queued action?
<jose> marcoceppi: welp, I ran another test and it's doing fine now :)
<jose> so... http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/2637/console
<jose> that test up there completed ok, and then the CI infrastructure decided to say 'exit 1'! and it failed for no reason (that I could see in the logs).
<jose> anything we know about this bug?
<jose> lazyPower: http://paste.ubuntu.com/15134043/ is the paste I promised :)
<ChrisHolcombe> this is prob a silly question but does leader election work in juju actions?
<jrwren> is-leader should be available.
<jrwren> leader election happens outside of any hooks and makes a status available and calls hooks when a new leader is elected
<ChrisHolcombe> jrwren, thanks
#juju 2016-02-21
<marcoceppi> stub: you around?
<marcoceppi> rick_h_: you around?
<rick_h_> marcoceppi: am now
<stub> marcoceppi: yo
#juju 2017-02-13
<Ankammarao> Hello Juju World!!!!!
<Ankammarao> i want to know, is there any command or process to remove or hide the old charm versions in the charm store
<kjackal> Good morning Juju world!
<kjackal> Ankammarao: I believe you can move the charm you have to a "non-searchable" channel
<Ankammarao> kjackal, how do i move it? is there any command or process
<kjackal> Ankammarao: you could try charm grant to remove access of a revision on a channel
<Ankammarao> kjackal, i have tried with the grant command by giving access to only one user, but other users are also able to see it now
<kjackal> Ankammarao: Reading the help of charm grant it seems you need to use the --set flag to overwrite any already existing acls
<kjackal> like so: charm grant ~johndoe/wordpress --acl write --set fred,bob
<kjackal> Ankammarao: There is also the "charm revoke" you can try
<nobuto> hi, can somebody review this pull request of apache-php layer? https://github.com/juju-solutions/layer-apache-php/pull/4
<kjackal> Ankammarao: you can do a "charm revoke --help"
<anita_> Hi I want to revoke the grant for one charm for specific versions only
<anita_> when I tried charm revoke, the revoking happens charm-wise
<anita_> not version-wise
<Ankammarao> kjackal, i have tried with the revoke command, it's removing or hiding the complete charm, but here we want to hide only the older versions, not the entire charm
<kjackal> Ankammarao: so revoke will not work if you pass in the revision like this: "charm revoke ~johndoe/wordpress-4"  ?
<kjackal> Ankammarao: Then i think charm grant with the --set on the revision is the only option you have
<kjackal> Ankammarao: like so: charm grant ~johndoe/wordpress-4 --channel edge --acl write --set fred,bob
<Ankammarao> kjackal, so only fred,bob can see the charm revisions or version
<kjackal> Ankammarao: yeap, based on the documentation
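(Putting the thread together, with the placeholder names kjackal used — hide a revision from most users by overwriting its ACL, or just move the channel's published tip past it:)
    # restrict who can read a given revision on a channel (--set overwrites existing ACLs)
    charm grant ~johndoe/wordpress-4 --channel edge --acl read --set fred,bob
    # or point the channel at a newer revision so the old one is no longer the published tip
    charm release ~johndoe/wordpress-6 --channel edge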
<Hetfield> hi, i'm using LXD local cloud provider for juju
<Hetfield> i would like to enable controller HA via the enable-ha command on 3 different machines, but always on LXD.
<Hetfield> i don't know how to tell juju to use a distributed provider like that
<Hetfield> while using MAAS, instead, it's easy and working
<rick_h> Hetfield: that's not currently supported.
<rick_h> Hetfield: for the moment, juju only works on one machine with lxd at a time. There's work that's been talked about for making lxd more like a cloud and juju would be updated to do more what you're looking for
<Hetfield> rick_h: oh, good, this means i'm totally dumb :)
<rick_h> Hetfield: no, means you're smart for wanting it and we're working on it :)
<Hetfield> rick_h: actually it would just mean to add more endpoints or model it as regions
<Hetfield> rick_h: any ETA?
<rick_h> Hetfield: now you're thinking. More like availability zones tbh
<rick_h> Hetfield: I think the lxd bits are looking to be in 16.04 and then juju would update in the next cycle
<Hetfield> rick_h: ok. but nowadays how do I get HA for production usage (on premise)? the only way is MAAS if i understand correctly
<rick_h> Hetfield: well maas, any public cloud, openstack, etc all support HA on different hardware like that
<rick_h> Hetfield: lxd is meant to be a local/one machine so HA on that isn't a thing. It's more the exception to the rule tbh
<Hetfield> rick_h: agree
<rick_h> Hetfield: maybe look at HA at the application level across different machines running lxd? e.g. if one machine goes boom the others can keep applications going and be brought back up?
<Hetfield> rick_h: i meant infra HA. i mean... if the machine with the juju controller (i will deploy openstack-charms) dies, what happens?
<rick_h> Hetfield: so you're running openstack on a single machine?
<Hetfield> i didn't want to add pacemaker or such cluster tools
<Hetfield> so, on several, let's say 10
<rick_h> Hetfield: sorry, I think I got confused. So you're going to run openstack then yes, we'd suggest you use maas to get that going
<rick_h> Hetfield: if you want to put the controllers in containers you need to manually setup VMs on the machines you want and add them to maas
<rick_h> Hetfield: then target those in bootstrap/enable-ha with the --to/constraints arguments
<Hetfield> oh, yes, it was my solution
<rick_h> Hetfield: there's not really a suggested production way to do openstack with the lxd provider, more the maas provider and our default tools stick things in lxd containers and spread them across machines for resiliency
<Hetfield> but as i'm going to use lxd for openstack components (apart from nova...) i wanted to add lxd juju controllers too
<rick_h> Hetfield: right, there's the chicken and egg problem there. To get around it you have to have the containers setup and in MAAS that looks like machines
<Hetfield> yes right
<Hetfield> it's very interesting
<Hetfield> rick_h: a sort of: how to compile gcc :)
<rick_h> :)
<kklimonda> is there documentation on creating a local mirror of the juju agent (and probably other tools) needed for bootstrap? I have a terrible internet connection where my juju is installed.
<kklimonda> looks like I have sstream-mirror https://streams.canonical.com/juju/tools [...]
<kklimonda> but I'd rather avoid downloading 5.5GB with all the releases
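(A sketch of what kklimonda is after — sstream-mirror takes simplestreams filters after the source and target, so something like this should trim the mirror to one series/arch; the exact filter keys are an assumption, check `sstream-mirror --help`:)
    # mirror only the newest amd64 xenial agents instead of the full ~5.5GB tree
    sstream-mirror --max=1 \
        https://streams.canonical.com/juju/tools/streams/v1/index2.sjson \
        ./juju-mirror 'arch=amd64' 'release=xenial'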
<Zic> lazyPower: hi, do you have anything I can begin testing tomorrow morning without waiting for you? (as we don't have much overlap in a day when we can both talk, given our local clock times) :)
<lazyPower> Zic - i just got a good build of the charm deployed in lxd (1 time)
<Zic> dat sync \o/
<anita_> Hi, How can I revoke one specific version of a charm?
<lazyPower> Zic - by end of day i'll have pinged you with a pr  and a personally published charm revision you can use for testing
<anita_> i tried revoke, but it is revoking charm wise
<lazyPower> we might push this in the edge channel i need to sync with the team
<lazyPower> Zic - however, the end game is we are on the right path to getting this fixed for your use case by end of week
<Zic> they didn't call me today (I'm out of office), so I expect the cluster worked fine with the single master without me :)
<lazyPower> Zic  fantastic :)
<lazyPower> thanks for pinging back and keeping up on this
<lazyPower> once i've got this in flight in kubes/kubes, i'll need your feedback on the PR and i'll need to rotate to another issue, however this should get you moving for more testing/feedback of the HA master. There's likely to still be some gremlins in there as this is first-pass stuff
<lazyPower> well second-pass actually
<lazyPower> but i digress
<anita_> Hi, How can I revoke one specific version of a charm? i tried revoke, but it is revoking charm wise, all versions are revoking
<lazyPower> anita_ - you can change the published "tip" of that charm stream to a prior revision or the next revision in sequence
<lazyPower> anita_ charm release cs:~team/entity-version  --channel=stable   should move the stable revision where you're looking to point it.  same with other channels, just sub in the channel name
<anita_> lazyPower_: I didn't get that
<lazyPower> anita_ - however i think the point is you dont revoke, its a moving "alias" pointing at the revision you want to represent the latest known good revision for that channel
<lazyPower> anita_ - as the charm itself is all tracked linearly in the "Unpublished" stream. you're just pointing to different combinations of the charm version and potentially any resources, when you do the charm release. there's a diagram for this in the docs under the charm store page in the dev guide
<anita_> like, I want to grant the 6th version whereas I want to revoke 4 and 5
<lazyPower> anita_ https://jujucharms.com/docs/stable/authors-charm-store#entities-explained
<anita_> lazyPower_:Ok let me check
<Zic> lazyPower: feel free to PM me in IRC if instructions are too long to put here :) I will begin testing tomorrow at 09:00 UTC, I will have time to do some tests before you wake up and make a recap of what I observed ^^
<lazyPower> Zic - sounds good
<lazyPower> Zic - i'll put it somewhere thats simple like juju deploy cs:~lazypower/canonical-kubernetes --channel=edge  or something similar. the idea is to get it in your hands as fast as possible
<lazyPower> so i'll eat that pretext work, and you focus on the usability/feedback cycle
<lazyPower> when you're ready we'll fold you in on the building from source routine, its a little complex for first time juju developers... a lot of custom tooling in here
<Zic> my time is divided at work between CDK/Kube and learning Go these days... so I'm available for intensive tests if you need :)
<lazyPower> Zic but if i can get you knowledgeable in our stack, it would be great to add you as a reviewer of the CDK code as it comes up for landing in kubernetes/kubernetes
<lazyPower> not that i have ulterior motives here ;)
<Zic> I'm interested in it yeah, as I'm planning to write some juju charms in a near future
<Zic> (and all knowledge about Juju will be helpful, as one of my teammates is planning to use Juju for an OpenStack cluster)
<anita_> lazyPower_: I want to revoke the read/write permission completely from those versions.
<lazyPower> anita_ - there is no revoke, you can change the head pointer
<anita_> oh ok
<lazyPower> the grant/revoke will sweepingly add permissions to an entire channel
<lazyPower> so thats why it appears like you've revoked the entire charm. Thats intended for doing early access isolation to a group of users for testing, eg: edge can only be deployed by dept. a and dept. b
<lazyPower> dept c - f will have to wait until it lands in candidate, as its too high risk for those models
<anita_> ok
<lazyPower> the whole point is risk assessment and subscription, the reason for no revoke is if someone deploys that revision and you revoke it, you've abandoned them in the release chain
<Zic> I don't know if Juju's code is a great place to show "Go in action" for a complete beginner in Golang :> but one of the attributes of Go seems to be "you will be able to read others' code in a week"
<Zic> an attribute that, as a C developer, I was never completely able to claim, even after some years of C, depending on the software stack
<Zic> we'll see :D
<lazyPower> Zic - well I was also told ruby will eat the world of software and we see how far that went
<Zic> hehe, I'm still confronted with Ruby as we heavily use Puppet here
<Zic> and for some hacks that cannot be done in Puppet's language, we directly need Ruby code :(
<lazyPower> Zic - i'm familiar with the syndrome
<lazyPower> thats one of the reasons why we went with reactive and pure python. DSL's are great when you *want* rails, but there's often times where you need to write up a funky little method to do something highly specific and it fits together better like that.
<Zic> lazyPower: I was told that JS will be everywhere, as Web Developers "contaminate" (with all my respect to web developers) the system-programming with NodeJS
<lazyPower> artisanal coding vs cargo-culting.
<Zic> and now, we see Rust, Go and other compiled languages, with a strong system-programming approach
<Zic> so I'm happy :)
<lazyPower> Zic - yep, predictions are just that, marketing tools to guide you somewhere, despite the social dynamic and "social climate" being very different indicators of that statement.
<Zic> *except* when I saw that Unity 8, and even the GNOME project, emphasize JS as the "de facto primary language" :'(
<lazyPower> well you never know ;)
<Zic> with Unity 8 using pure-QML, and GNOME with GJS
<lazyPower> js does make for an awesome wiring language to put together UI elements and give them behavior
<lazyPower> i'm kind of happy to see ECMASCRIPT getting some love
<lazyPower> but thats a very different beast than the devops we're working on
<Zic> in my company, most of our customers still use PHP for the backend, some of them use NodeJS
<Zic> some of *our* backends are in Go, by contrast :)
<lazyPower> well with the incoming dockerpocalypse we'll see a lot more using the whole menagerie of backend tech because containers and their nature of disposable infra
<lazyPower> for quick POC style testing, and the longer-term applications that make the cut
<Zic> when I talked about LXD in our weekly meeting, the immediate question was "And can we run Docker in it? And in what way is it different from Docker in Docker?"
<lazyPower> i wouldn't be surprised if you find a stack of python, ruby, nodejs, go, php, and probably even some erlang if you're persnickety
<Zic> *sigh*
<lazyPower> Zic - we have some material on that which may help
<lazyPower> there's an infographic for lxd vs docker
<lazyPower> which calls out the strengths of each given context
<lazyPower> Zic - something like this is a good way to start the conversation https://insights.ubuntu.com/2015/09/23/infographic-lxd-machine-containers-from-ubuntu/
<Zic> yeah, but I know it will end with "LXD is replacing our VMs, right, but... *Where can I run my Docker?*"
<lazyPower> Zic - the answer is "just about anywhere/everywhere"
<lazyPower> we've gone through great lengths to ensure docker works in all those substrates
<Zic> one thing at least: I will try to replace my local use of KVM/Qemu with LXD containers for my future tests
<lazyPower> we want docker on ubuntu to just work. I think the only place left as a hold out is bash on windows, and who knows where that API is, i'm not working on the project so its hard to say.... but if i were to guess those system calls will eventually be translated and you'll see something fun in there.... but dont hold me to that, I have no idea if thats on the roadmap, its pure speculation.
<lazyPower> Zic  - man ;) I'm using lxd right now to test this HA patch
<Zic> :D
<lazyPower> the brilliant part of LXD is the capacity to model and test HA deployments locally
<lazyPower> you can *even* simulate network partitions
<lazyPower> with a simple profile edit on the lxd container
<lazyPower> want to see what etcd really does when you pull the plug on 2 units? no problem
<lazyPower> snip snip
<lazyPower> oh look 2 units died and the one unit is complaining it can't find quorum
<lazyPower> welp, i guess 3 nodes is a disaster, better scale to 5
<lazyPower> (how i validated the etcd documentation assertions)
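(A sketch of that experiment — the container names are placeholders, and it assumes etcdctl's v2-era cluster-health command is available on the units:)
    juju run --unit etcd/0 'etcdctl cluster-health'   # healthy 3-member cluster
    lxc stop juju-machine-1 juju-machine-2            # pull the plug on 2 of the 3 members
    juju run --unit etcd/0 'etcdctl cluster-health'   # the survivor reports lost quorum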
<Zic> anyway, what pushed me to do some learning of Go was that Canonical's tools seem to use Go as the primary language now (it was Python a few years back); before, I was like "meh, another language... at least it's compiled, fine" :>
<lazyPower> lol i know you're excited about this
<lazyPower> you were all jazzed last week
<lazyPower> i'm sorry i don't share your enthusiasm about go :) I'm pretty laser-focused on our k8s roundup these days
<Zic> hehe :D
<Zic> I'm not a go-enthusiast for now, I'm still learning it as a "second-class language", like I did with Ruby (was forced to, with Puppet)
<Zic> maybe it will become one of my main tooling languages in the future
<Zic> maybe. :)
<lazyPower> Zic - the world can only hope for so much awesomeness my friend
<Zic> for now, my main languages are Bash, Python & C
<lazyPower> go forth and conquer
<Zic> and I didn't even mention Rust :D as I'm pro-Firefox, I'm tracking Servo's evolution and the Rust language too :)
<Zic> Rust is not as easy to adopt as Go anyway, but seems to be a great language also
<lazyPower> i know the prime author behind it. I used to attend meetups with steve klabnik
<lazyPower> he's a pretty smart dude, so i have no doubt rust is the bees knees
<Zic> it's one of the reasons I love C, it's nearly 45 years old now iirc, it resists the "hype and dance" of programming languages
<Zic> I love seeing one of my sandbox programs from 1999 still running, and that I'm able to edit it easily
<Zic> it's not the same when I find an old Java program of mine :p
<Zic> (yeah, I did Java... at school... *burp*)
<stormmore> howdy juju world!
<lazyPower> o/ stormmore
<lazyPower> Zic making the PR now, will ping you with instructions before i move to the next objective
<derekcat> Hey everyone, does anyone know where Juju stores ssh known_hosts? It keeps telling me to:  ssh-keygen -f "/tmp/ssh_known_hosts736182584" -R <IP address>  ...Which obviously doesn't work when the /tmp version disappears...
<lazyPower> Zic - if you want to tag and follow along - https://github.com/kubernetes/kubernetes/pull/41351
<CarlFK> how do I clean up machine 1?   juju status: http://paste.ubuntu.com/23990363/
<lazyPower> CarlFK juju remove-machine 1 --force
<CarlFK> lazyPower: thanks. one down.  now to figure out why  30 min later cnt5 is still "installing charm software"
<CarlFK> never mind.  need for it has evaporated.
<lazyPower> Zic - simple instruction is to deploy the bundle via conjure-up as normal, when you're at the waiting/allocating screen: juju upgrade-charm kubernetes-master --switch cs:~lazypower/kubernetes-master --channel=edge
<lazyPower> Zic - that should replace your masters with the patched version for HA and you should be able to start testing from there. it's in the pipeline and ready for testing from my store copy of that charm. The remainder of the bundle was untouched by this change.
<marcoceppi> derekcat: I think that's on the controller? rick_h ^?
<derekcat> marcoceppi: Do you know where it might be on the controller?  I've tried find / -name known_hosts but nothing is found..
<derekcat> *as root
<bdx> derekcat: I don't think there should be a known_hosts file by default, should there be?
<bdx> juju just adds ssh keys to .ssh/authorized_keys for the ubuntu user to allow access
<derekcat> bdx: It's got one somewhere..  From the box that I administer Juju from, I can ssh ubuntu@<IP of machine added to Juju>, but when I try to juju ssh postgresql/14 it comes back with a host identification error: @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
<marcoceppi> derekcat: is there one in ~/.local/share/juju/
<marcoceppi> ?
<derekcat> marcoceppi: jujuadminbox:~/.local/share/juju/ssh only has juju_id_rsa  juju_id_rsa.pub
<marcoceppi> derekcat: I've not been able to find this either. It might be worth asking on juju@lists.ubuntu.com
<derekcat> marcoceppi: Ahh dang..  I'll give that a shot.  Thank you!
<GMR-OB> Anyone in here who knows how to fix a Percona Cluster (JUJU CHARM)  when all nodes are in blocked status ?
<lazyPower> GMR-OB - existing deployment or fresh deploy?
<GMR-OB> existing deployment, happened after a RAM crash on a single server
<GMR-OB> Box is back up but now Percona Cluster is saying status blocked on all 3 nodes: juju gui shows all ok
<lazyPower> GMR-OB  is this part of an openstack deployment?
<teranet> yes it is, sorry had to fix my nickname too LOL
<lazyPower> teranet np :) I was going to suggest poking in #openstack-charms, i would suspect that whatever is blocking you has come up in CI before
<teranet> ok will do thx
<lazyPower> and its likely someone there might have some advice, otherwise i'm happy to take stabs at helping you resolve with generic troubleshooting advice
<Zic> lazyPower: ack, I will try this tomorrow :)
<lazyPower> Zic - final note is its cs:~lazypower/kubernetes-master-11
<lazyPower> if you see it grab a diff revision, somethings wrong and you should specify revision 11 explicitly to the command. but the channels should "just work"
<stormmore> damn it getting a bad gateway error :-/
<lazyPower> stormmore - on cdk?
<stormmore> yeah but it is most likely my fault :-/
<lazyPower> stormmore - ive had some reports of 502 bad gateways on deployments recently and i haven't been able to reproduce
<lazyPower> if you can reproduce it reliably please file a bug, it might be racey
<lazyPower> which would explain why i'm not seeing it and others are
<stokachu> stormmore, you deploying on aws?
<stormmore> I just did a basic ingress controller but I suspect it is cause I got my container to redirect http to https
<stormmore> stokachu yeah I am
<stokachu> stormmore, what version of juju?
<stormmore> juju 2.0.3
<stokachu> hmm ok
#juju 2017-02-14
<stokachu> i see that 502 error sometimes too
<stokachu> hard to reproduce
<stormmore> could it be because I don't have dns setup correct?
<stormmore> yup that was the problem
<stormmore> now I have to figure out why my ingress isn't presenting the right wrong cert
<lazyPower> "the right wrong cert" - so many questions about this statement...
<stormmore> lazyPower well I am expecting an error due to DNS not being setup instead of a CA / self-signed issue
<stormmore> lazyPower do you have an example of an https ingress pulling its cert from the secrets collection to help me see where I went wrong?
<stormmore> for some reason my ingress is pulling a self-signed cert!!
<lazyPower> https://kubernetes.io/docs/user-guide/ingress/
<lazyPower> under the TLS header
<stormmore> yeah that is what I was using :-/
<lazyPower> https://github.com/kubernetes/ingress/blob/master/controllers/nginx/README.md#https
<lazyPower> are you attempting to configure this via configmap?
<lazyPower> just perchance? we have an open PR to implement that feature, however its author is on vacation
<lazyPower> i may need to piggyback it in
<stormmore> no just did a kubectl create secret tls wildcard --key <key file> --cert <cert file>
<lazyPower> hmm ok
<stormmore> https://gist.github.com/cm-graham/51c866e87934b53daa64afa104a4f6b7 is my YAML
<lazyPower> stormmore - can you confirm the structure of the secret has the keys 'tls.key' and 'tls.crt'?
<stormmore> lazyPower - yeah that is one of the things I checked
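(For anyone following along, those checks look roughly like this — "wildcard" is the secret name from stormmore's setup, the ingress name is a placeholder:)
    kubectl get secret wildcard -o jsonpath='{.data}'   # should list tls.crt and tls.key
    kubectl describe ingress my-ingress                 # shows which TLS secret the ingress bound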
<lazyPower> honestly iv'e only ever tested this with self signed tls keys
<lazyPower> i'm not sur ei would have noticed a switch in keys
<stormmore> I am still curious as to where it got the cert it is serving
<lazyPower> let me finish up collecting this deploy's crashdump report and i'll context switch to looking at this stormmore.
<lazyPower> i'm pretty sure the container generates a self signed cert that it will offer up by default
<lazyPower> and you're getting that snakeoil key
<stormmore> yeah that is what I am thinking
<lazyPower> like the ingress rule itself isn't binding to that secret
<lazyPower> so its falling back to default
<stormmore> one way to test that theory... going to go kill a container!
<stormmore> hmmm so the container needs to present that cert first?
<lazyPower> i dont think so
<lazyPower> stormmore ok sorry about that, now i'm waiting for a deploy of k8s core on gce.
<lazyPower> i'll pull this in and try to get a tls ingress workload running
<stormmore> lazyPower no worries, as always I don't wait for anyone :P
<lazyPower> good :)
<lazyPower> if you cant get this resolved i'll happily take a bug
<stormmore> lazyPower I am trying a different namespace to see if it is a namespace overload issue
<stormmore> OK definitely not a namespace issue, and even deleting the deployment, service and ingress didn't stop it serving the right wrong cert
<stormmore> and I have confirmed that the container serves the right cert if I run it locally
<lazyPower> you mean locally via minikube or via lxd deployment or?
<stormmore> even simpler ... docker run -p...
<lazyPower> ok, i would expect there to be different behavior between that and the k8s ingress controller
<lazyPower> it has to proxy tls to the backend in that scenario + re-encrypt with whatever key it's serving, as configured in the LB
<stormmore> just ruling out the container 100%
<stormmore> it serves both tls and insecure traffic at the moment, so should be fine for lb/ingress at the moment
 * lazyPower nods
<stormmore> yeah I am running out of ideas on what to try next :-/
<stormmore> well short of destroying the cluster and starting again!
<stormmore> you should only need to setup the deployment, service and ingress right?
<lazyPower> stormmore correct
<stormmore> hmmm sigh :-/ still not serving the cert from the secret vault
<lazyPower> stormmore - this is whats incoming https://github.com/kubernetes/kubernetes/pull/40814/files
<lazyPower> and it has tls passthrough from the action to the registry running behind it, but it's configured via configmap.
<lazyPower> so this branch is a pre-req to get the functionality but this is our first workload configured with tls that we have encapsulated as an action, which has the manifests
<stormmore> yeah I am monitoring that
<stormmore> I am also thinking nexus3 is potentially a better option for us as it gives us other repository types than docker registries
<stormmore> also I am only using it at the moment as a test service
<stormmore> going to find an AWS region we aren't using and set up the VPC more appropriately for a cluster
<stormmore> going to head home and see if I can do that
<Budgie^Smore> lazyPower it's stormmore, do you know of a good guide for spinning up a k8s aws cluster including setting up the VPC appropriately?
<abhay_> Hi Chris
<abhay_> are you there chris  ?
<veebers> abhay_: see pm
<abhay_> ok
<kjackal_> Good morning Juju World!
<kklimonda> how do I create bundle with local charms?
<kklimonda> (using juju 2.0.3)
<kklimonda> right now I have a bundle/ directory with bundle.yaml and charms/ inside, and I'm putting local charms into charms/)
<kklimonda> but when I try to use charm: ./charms/ceph-269 I get an error >>path "charms/ceph-269" can not be a relative path<<
<kklimonda> I guess I can workaround by passing an absolute path instead, but that's not what I should be doing
<kjackal_> Hi kklimonda, I am afraid  absolute path is the only option at the moment.
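(A workaround sketch along the lines of kjackal_'s answer — hypothetical layout, with shell expansion so the absolute path isn't hard-coded; "services" was the top-level bundle key in that juju 2.0 era:)
    cat > bundle.yaml <<EOF
    services:
      ceph:
        charm: $(pwd)/charms/ceph-269
        num_units: 3
    EOF
    juju deploy ./bundle.yaml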
<SimonKLB> what is the best practice for exposing isolated charms (i.e. charms co-located in LXD containers) ?
<SimonKLB> right now i manually add iptables rules to port forward since expose doesnt seem to be enough, but perhaps there is a juju-way ?
<magicaltrout> that is the way SimonKLB
<magicaltrout> all expose does is if you have a provider that understands firewalls, to open the port
<magicaltrout> LXD obviously doesn't so expose does nothing
<SimonKLB> magicaltrout: right, do you know if there has been any discussion regarding this before? it would be neat if the expose command did the NATing for us if the targeted application was in an LXD container
<SimonKLB> but perhaps this is not implemented by choice
<magicaltrout> SimonKLB: I've brought it up before, I don't think it ever really got anywhere. You could do this type of thing https://insights.ubuntu.com/2015/11/10/converting-eth0-to-br0-and-getting-all-your-lxc-or-lxd-onto-your-lan/
<magicaltrout> basically just bridge the virtual adapter through the host adapter and get real ip addresses
<magicaltrout> all depends on how much you can be bothered :)
<SimonKLB> magicaltrout: i wonder how well that's going to work on a public cloud though
<kklimonda> any idea how to debug juju MAAS integration? I can deploy juju controller just fine, but then I can't deploy my bundle - juju status shows all machines in a pending state, and maas logs show no request to create those machines
<magicaltrout> SimonKLB: indeed, it will suck on a public cloud :)
<magicaltrout> i reckon there is probably scope for a juju plugin someone could write to add that functionality though. Something like juju nat <application-name>
<magicaltrout> but i'm not that guy! ;)
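(The manual approach SimonKLB describes is roughly this — the container address and host interface are placeholders:)
    # forward host port 80 to the unit's LXD container
    sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
        -j DNAT --to-destination 10.0.8.15:80
    sudo iptables -A FORWARD -p tcp -d 10.0.8.15 --dport 80 -j ACCEPT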
<kklimonda> also, are there any chinese mirrors of streams.canonical.com?
<magicaltrout> hey kklimonda i'm not a maas user so I can't offer help but you could also #maas if you're not already there to see if other people are awake
<kklimonda> #juju seems to be more active
<magicaltrout> lol fair enough
<magicaltrout> it will certainly pick up later in the day when more US folks come online
<magicaltrout> kjackal_ might be able to provide some MAAS support, I'm not sure
<ayush> cholcombe: Hey
<ayush> cholcombe: I needed some help regarding the Ceph Dash charm
<ayush> help
<magicaltrout> ayush: you might find people awake on #openstack-charms
<ayush> Thanks. Will check there :)
<anrah> Hi! Can someone help with juju storage and cinder?
<anrah> My deployments to my private openstack cloud are working fine, but I would want to use cinder volumes on my instances for logfiles etc.
<anrah> I have https enabled on my openstack and I use ssl-hostname-verification: false
<anrah> Units get added without problem, but when I want to add storage to instances I get error https://myopenstack.example.com:8776/v2/b3fbae713741428ca81bca384e037540/volumes: x509: certificate signed by unknown authority
<kjackal_> kklimonda: I am not a maas user either, so apart from the usual juju logs I am not aware of any other debuging ...facilities
<kjackal_> anrah:  You might have better luck asking at #openstack-charms
<anrah> I'm not deploying openstack :)
<anrah> I'm deploying to OpenStack
<anrah> OpenStack as provider
<kklimonda> kjackal_: so it seems juju is spending some insane amount of time doing... something, most likely network related, before it starts orchestrating MAAS to prepare machines
<cory_fu> stub, tvansteenburgh: ping for charms.reactive sync
<kklimonda> this is a lab somewhere deep in china, and the connection to the outside world is just as bad as I've read about - it looks once juju finishes doing something it starts bringing nodes, one at a time
<magicaltrout> kklimonda: so a few things should happen, MAAS will check to make sure it has the correct images as far as I know, if it doesn't it'll download some new ones, likely Trusty and Xenial
<magicaltrout> then when juju spins up it will start downloading the juju client software and then do apt-get update etc
<kklimonda> for the maas images, I've created a local mirror with sstream-mirror and pointed MAAS to it
<kklimonda> it's definitely possible that juju is trying to download something else
<magicaltrout> yeah it will download the client, then when thats setup any resources it needs for the charms
<kklimonda> can I point it to alternate location?
<magicaltrout> not a clue, clearly you can run an apt mirror somewhere
<magicaltrout> i don't know how you point cloud init somewhere else though
 * magicaltrout taps in kjackal_ or an american at this point
<kklimonda> I don't think it's even getting to apt
<kjackal_> kklimonda: I am not sure either, sorry
<kklimonda> controller is deployed and fine, and machines are in pending state without any visible progress for at least 10 minutes (that's how long it took for juju to spawn first out of three machines)
<magicaltrout> juju status --format yaml might, or might not give you more to go on
<magicaltrout> rick_h loves a bit of MAAS when he gets into the office
 * rick_h looks up and goes "whodawhat?"
<kklimonda> yeah, I'll wait around ;)
<rick_h> kklimonda: what's up? /me reads back
<magicaltrout> he does love MAAS, don't let him tell you otherwise! ;)
<kklimonda> rick_h: I have a MAAS+Juju deployment somewhere in China, and juju add-machine takes ages
<rick_h> I do, my maas https://www.flickr.com/gp/7508761@N03/47B58Y
<kklimonda> my current assumption is that juju, before it even starts machine through MAAS, is trying to download something from the internet
<kklimonda> which is kinda no-go given how bad internet is there
<rick_h> kklimonda: probably pulling tools/etc from our DC perhaps. You might know more looking at the details logs and doing stuff like bootstrap --debug which will be more explicit
<rick_h> kklimonda: hmm, well it shouldn't do anything  before the machine starts
<rick_h> kklimonda: it's all setup as scripts to be run when the machine starts
<kklimonda> bootstrap part seems fine
<kklimonda> I've done a sstreams-mirror to get agent/2.0.3/juju-2.0.3-ubuntu-amd64.tgz and later bootstraped it like that: juju bootstrap --to juju-node.maas --debug --metadata-source  tools maas --no-gui --show-log
<rick_h> kklimonda: hmm, no add-machine should just be turning on the machine and installing jujud on the machine and register it to the controller
<magicaltrout> kklimonda: if you tell maas just to start a xenial image does it come up?
<kklimonda> I can deploy xenial and trusty just fine through UI and it takes no time at all (other than servers booting up etc.)
<kklimonda> but juju is not even assigning and deploying a machine until it finishes doing... whatever it's doing
<kklimonda> the funny part is, it seems to be working just fine - only with a huge delay (like 15 minutes per machine)
<rick_h> kklimonda: hmm, yea might be worth filing a bug with as many details as you can put down. what versions of maas/juju, how many spaces/subnets are setup, what types of machines they are, etc.
<kklimonda> sigh, it's definitely connecting to streams.canonical.com
<kklimonda> I just tcpdumped traffic
<magicaltrout> i blame canonical for not having a DC in the back of china!
<kklimonda> sigh, there are mirrors for old good apt repositories
<kklimonda> but we're living in a brave new world
<kklimonda> and apparently infrastructure has not yet caught up ;)
<magicaltrout> well you can boot off a stream mirror, I wonder why it's not using your config
<magicaltrout> or is it a fallback. I'm unsure of how simple streams work, it's like some black art stuff
<kklimonda> this part seems to be rather thinly documented
<ayush> Has anyone used the ceph dashboard chime?
<ayush> charm*
<marcoceppi> ayush: I have, a while ago
<ayush> Did you use it with the ceph charms? Or can it be set up with a separate ceph cluster?
<marcoceppi> ayush: I used it with the ceph charms
<ayush> Okay.
<ayush> Which version of juju were you using?
<marcoceppi> ayush: ultimately because you need to run additional software on the ceph nodes to actually gather the insights
<marcoceppi> juju 2.0
<ayush> marcoceppi: I ran this. Could you tell me how to get the credentials? "juju config ceph-dash 'repository=deb https://username:password@private-ppa.launchpad.net/canonical-storage/admin-ceph/ubuntu xenial main'"
<marcoceppi> ayush: you'd have to chat with cholcombe or icey on that
<ayush> marcoceppi: Thanks :)
<kjackal_> cory_fu: I would like your help on the badge status PR. Let me know when you have 5 minutes to spare
<cory_fu> kjackal_: Ok, just finishing up another meeting.  Give me a couple of min
<Zic> hi here
<Zic> lazyPower: are you around?
<Zic> I have some problem with conjure-up canonical-kubernetes, two LXD machines for kubernetes-worker are staying in "pending"
<Zic> (and the charm associated are blocked in "waiting for machine" so)
<stokachu> Zic, yea im working to fix that now
<Zic> ah :}
<cory_fu> stub, tvansteenburgh: Thanks for the charmhelpers fix.  We once again have passing Travis in charms.reactive.  :)
<stub> \o/
<Zic> stokachu: do you have a manual workaround?
<Zic> http://paste.ubuntu.com/23995096/
<cory_fu> kjackal_: Ok, I'm in dbd
<kjackal_> cory_fu: going there now
<cholcombe> ayush, you have to be given access to that PPA
<cholcombe> ayush, seems you and icey have been in contact.  i'll move this discussion over to #openstack-charms
<stokachu> Zic, is this the snap version?
<Zic> stokachu: nope, but I think it was because I forgot to 'apt update' after the add-apt-repository for the conjure-up's PPA :)
<Zic> (I got the older version of conjure-up, with the new one from PPA it seems to be OK)
<Zic> lazyPower: are you awake? :)
<kklimonda> is there a juju way for handling NTP?
<lazyPower> kklimonda - juju deploy ntp
<lazyPower> Zic - heyo
<kklimonda> will it just deploy itself on each and every machine and keep track of new machines I add?
<Zic> lazyPower: my conjure-up deployment stalled at "Waiting to retry KubeDNS deployment" on one of the 3 masters, don't know if it's normal: http://paste.ubuntu.com/23995418/
<kklimonda> ah,  I see
<Zic> on the first deploy, I had a "too many open files" error; I increased fs.file-max via sysctl and did a second deployment :)
<kklimonda> I can create a relationship for other units that need ntp
<kklimonda> cool
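(For the record, the whole dance — ntp is a subordinate charm, so a unit lands on every machine hosting the related application:)
    juju deploy ntp
    juju add-relation ntp myapp   # "myapp" is a placeholder; relate once per application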
<Zic> now it's just this silly "waiting" which blocks
<lazyPower> Zic - i saw that when i was testing before the adontactic was updated
<lazyPower> *addonTactic
<Zic> lazyPower: I don't know what I can do to unblock it, it's been in this state for 30 minutes
<lazyPower> Zic - did you swap with the cs:~lazypower/kubernetes-master-11 charm?
<Zic> yep
<Zic> lazyPower: I saw the step "Waiting for crypto master key" (I think it's the one you added)
<lazyPower> Zic -yeah, thats correct
<Zic> but I have one of these kubernetes-master instances which stays in waiting on KubeDNS :/
<lazyPower> Zic - give me 1 moment i'm on a hangout helping debug a private registry problem
<lazyPower> i think i might have botched the build with the wrong template
<lazyPower> i'll need to triple check
<Zic> :D
<Zic> lazyPower: just for info, I used the "juju upgrade-charm kubernetes-master --switch cs:~lazypower/kubernetes-master --channel=edge" command
<Zic> (I didn't see your update with cs:~lazypower/kubernetes-master the first time :D)
<Zic> (I didn't see your update with cs:~lazypower/kubernetes-master-11 the first time :D)
<Zic> but the output displayed with --channel=edge was actually cs:~lazypower/kubernetes-master-11 so it seems OK
<lazyPower> Zic - running another build now, give me a sec to build and re-test
<Zic> np :)
<Zic> FYI, my mth-k8stest-01 VM is about 8vCPU of 2GHz, 16GB of RAM and 50GB of disk (I saw that CPU and RAM was heavily used in my first attempt with 4vCPU/8GB of RAM)
<lazyPower> Zic - so far still waiting on kube-addons
<lazyPower> churning slowly on lxd but churning all the same
<lazyPower> (i have an underpowered vm running this deployment)
<Zic> lazyPower: yeah, it stalled a long time at "Waiting for kube-system pods to deploy" (something like that) but this step passed OK
<lazyPower> Zic - if it doesn't pass by the first/second update-status message cycle it's broken
<lazyPower> Zic - dollars to donuts it's failing on the template's configmap
<lazyPower> Zic - kubectl get po --all-namespaces && kubectl describe po kube-dns-blahblah
<lazyPower> should say something about an error in the manifest if it's what i think it is
<Zic> yeah, I tried that, it's in Running
<lazyPower> plot thickens...
<lazyPower> mine turned up green
<lazyPower> waiting on 1 more master
<Zic> yep
<Zic> it took a while on the 2 others, but they finally went green
<Zic> but one master is still waiting in kubernetes-master/1       waiting   idle       8        10.41.251.165   6443/tcp        Waiting to retry KubeDNS deployment
<Zic> oops, missed copy/paste
<lazyPower> Zic - but the pod is there and green?
<Zic> yep
<Zic> don't know what it's waiting for though :D
<lazyPower> likely a messaging status bug
<lazyPower> if the pod is up
<lazyPower> actually Zic
<lazyPower> restart that vm
<lazyPower> test the resiliency of the crypto key
<lazyPower> nothing like an opportunity to skip a step :)
<Zic> oki
<lazyPower> Zic https://www.evernote.com/l/AX7J_eiKOdNF94_eSoBqGZ3fjz-ZA8qQzAkB/image.png
<lazyPower> confirmation that its working for me locally. if you see different results, i'm highly interested in what was different
<lazyPower> Zic - and to be clear, i did the switch *before* the deployment completed
<lazyPower> i do need to add one final bit of logic to the charms to update any existing deployments
<lazyPower> its not wiping the auth.setup state which it should be on upgrade-charm.
<Zic> lazyPower: yep, I did the --switch ~5s after conjure-up prompted me to press Q to quit :)
<Zic> (as you said I need to switch when juju status prints "allocating")
<lazyPower> yeah :) thats just to intercept before deployment and start with -11
<lazyPower> not test an upgrade path
<lazyPower> this was only tested as a fresh deploy, so the upgrade steps still need to be fleshed out but it should be a simple state sniff and change
<Zic> for now, it's stalled at: kubernetes-master/1       waiting   idle   8        10.41.251.165   6443/tcp        Waiting for kube-system pods to start
<Zic> (after the reboot of the LXD machine)
<lazyPower> :\ boo
<lazyPower> ok, can i trouble you for 1 more fresh deploy?
<Zic> all kube-system pods are Running...
<Zic> yep
<lazyPower> thanks Zic  - sorry, not sure what happened there
<Zic> I'm here for ~1 hour more :p
<lazyPower> however did your crypto-key validation work?
<lazyPower> were you able to verify all units had the same security key
<Zic> I didn't test it, as I thought this KubeDNS waiting error blocked the final steps of installation
<Zic> oh
<lazyPower> if the kubedns pod is running
<Zic> it switched to active o/
<Zic> just now
<lazyPower> did it?
<Zic> yep
<lazyPower> fantastic, it was indeed a status messaging bug
<lazyPower> looks like perhaps the fetch might have returned > 0
<lazyPower> not certain, but thats why the message says that, is its waiting for convergence of the dns container
<Zic> yeah, I just rebooted the LXD machine with this status message blocked
<Zic> took ~4 minutes to switch to active/idle/green
<lazyPower> update-status runs every 5 minutes
<lazyPower> so that seems about right
<Zic> ok
<Zic> as "reboot" of LXD machine is too fast, I don't know if it's a good test for resilience
<Zic> if I poweroff instead, and wait about 5 minutes
<Zic> I need to find how to re-poweron an LXD machine :D
<Zic> it's my first-time-use of LXD :p
<lazyPower> Zic - lxc stop container-id
<lazyPower> lxc start container-id
<lazyPower> lxc list shows all of them
<lazyPower> Zic - did you snap install or apt install?
<lazyPower> just curious :)
<Zic> apt
<Zic> I just followed the homepage quickstart :)
<Zic> (as it was updated with conjure-up instruction and add-apt-repository)
<lazyPower> ok
<lazyPower> :) I'm running a full snap setup and it's working beautifully
<lazyPower> not sure if you want to dip your toes in the snap space but its a potential for you
<Zic> this mth-k8stest-01 VM will stay for test I think
<lazyPower> as the snaps seem to move pretty quickly, and they auto-update
<Zic> so I can test snaps in it :)
<lazyPower> nice
<lazyPower> just make sure you purge your apt-packages related to juju before you go the snap route
<bdx> lazyPower: giving CDK another deploy in a few minutes, trying to get the exact steps documented to get deis running post CDK deploy
<Zic> for the test VM I can go through snap; for the real cluster, I only have the right to reinstall it one last time tomorrow morning :x
<Zic> so I'm undecided whether I can use your edge patch directly in production
<Zic> or if I should wait that it go to master
<lazyPower> Zic - wait until its released with the charms
<Zic> (master branch)
<lazyPower> Zic - there's additional testing, this was just early validation
<lazyPower> Zic - as well as the upgrade path needs to be coded (remember this is a fresh deploy test vs an upgrade test)
<Zic> yeah, but as always, deadlines ruin my world: the last time I can reinstall the cluster is tomorrow morning, so I think I will just install the old (released) version, with the bug, with a single master
<Zic> and when your upgrade will go to release, I will add two more master
<Zic> do you think it's the right path?
<Zic> or is it better to deploy directly with 3 masters on the old release, and power off two masters while waiting for your patch to reach production?
<Zic> (it's just for the real cluster, I can do every tests I need/want on other VMs :))
<Guest91022> Hi, this is regarding revoking a few revisions of a charm. The revisions that needed to be revoked were first released to a different channel, and then I tried to revoke those revisions.
<Guest91022> but revoking is not happening revision-wise
<Guest91022> grant/revoke is happening for all revisions of the charm.
<Guest91022> please advise
<Zic> lazyPower: for the master part it seems ok for now
<Zic> lazyPower: but for the worker part, if I reboot one of the workers, the ingress-controller on it passes to "Unknown" and tries to respawn on another node... and stays in Pending
<Zic> don't know if it's the normal behaviour with ingress-controller pod
<Zic>   5m  20s  22 {default-scheduler }   Warning  FailedScheduling pod (nginx-ingress-controller-3phbl) failed to fit in any node
<Zic> fit failure summary on nodes : PodFitsHostPorts (2)
<Zic> oh, as Ingress Controller listen on 80 it's logic that I got this error
<lazyPower> :()
<lazyPower> Zic - interesting
<lazyPower> so it's trying to migrate an ingress controller to a unit that's already hosting one, to satisfy the replica count
<Zic> yep
<lazyPower> it has requested an impossible scenario :P
<lazyPower> i love it
<Zic> but as it already has an Ingress which listens on *:80... it raises a PodFitsHostPorts error
<lazyPower> yet another reason to do worker pooling
<lazyPower> and have an ingress tier
<Zic> does StatefulSet have a kind of "max one pod of this type per node"?
<Zic> it's maybe one possible solution :)
<lazyPower> we would need to investigate the upstream addons and see if they would be keen on accepting that
<lazyPower> we dont modify any of the addon templates in order to keep that "vanilla kubernetes" label
<lazyPower> i think we do one thing, which is sniff arch
<lazyPower> but otherwise, it's untainted by any changes
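[Editor's note: the "max one pod of this type per node" constraint Zic asks about can be expressed with pod anti-affinity rather than a StatefulSet; a hedged sketch follows, with illustrative names, a placeholder image, and an apiVersion that may need adjusting for older clusters. A DaemonSet is the other common way to get exactly one such pod per node.]

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-ingress-controller
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx-ingress
      template:
        metadata:
          labels:
            app: nginx-ingress
        spec:
          # schedule at most one of these pods per node
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app: nginx-ingress
                topologyKey: kubernetes.io/hostname
          containers:
          - name: controller
            image: nginx-ingress-controller:example   # placeholder image
            ports:
            - containerPort: 80
              hostPort: 80
    EOF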
<Zic> it's not a crash issue, so I'm kinda happy with it anyway :D
<lazyPower> sounds like its testing positively then?
<lazyPower> aside from that one oddball status message issue
<Zic> yep
<lazyPower> fantastic
<lazyPower> i'll get the upgrade steps included shortly and get this prepped for the next cut of the charms
<lazyPower> thanks for validating Zic
<bdx> lazyPower: http://paste.ubuntu.com/23996226/
<lazyPower> bdx - in a vip meeting, let me circle back to you afterwards
<lazyPower> ~ 40 minutes
<bdx> k, np
<lazyPower> <3 ty for being patient
<cory_fu> kjackal_: I know it's late and you should be EOD, but were you +1 to merging https://github.com/juju-solutions/layer-cwr/pull/71 with my fix from earlier?
<kjackal_> cory_fu: I did not have a successful run but went through the code and it was fine
<kjackal_> cory_fu: So yes, merge it!
<cory_fu> heh
<cory_fu> kwmonroe: You want to give it a poke?
<cory_fu> I ask because I'm trying to resolve the merge conflicts in my containerization branch and would like to get that resolved at the same time
<cory_fu> kjackal_: Also, one last thing.  Who was it you said possibly had a fix for https://travis-ci.org/juju-solutions/layer-cwr/builds/201538658 ?
<kjackal_> it was balloons!
<cory_fu> balloons: Help!  :)
<kwmonroe> yup cory_fu, do you have cwr charm released in the store?
<kjackal_> cory_fu: balloons: it is about releasing libcharmstore
<cory_fu> kwmonroe: What do you mean?  That PR branch?
<balloons> ohh, what did I do? :-)
<rick_h> kjackal_: libcharmstore? https://github.com/juju/theblues ?
<kwmonroe> yeah cory_fu.  is it built/pushed/released somewhere, or do i need to do that?
<cory_fu> balloons: kjackal_ says you might know how to fix our travis failure due to charm-tools failing to install
<kwmonroe> or cory_fu, do you just want me to pause for 5 minutes, pretend like i read the code, and merge it?
<kjackal_> rick_h: cory_fu: balloons: its about this: https://github.com/juju/charm-tools/issues/303
<cory_fu> rick_h: libcharmstore seems to be just a wrapper around theblues at this point.  Not sure what it provides extra that charm-tools needs
<rick_h> cory_fu: ah ok cool
<cory_fu> kwmonroe: I can push that branch to the store, but I thought we didn't update the store until it was merged.  I'll push it to edge, tho
<kwmonroe> cory_fu: i would be most thankful for an edge rev
<balloons> use a snap? AFAICT, charm-tools wasn't built for trusty at any point.
<balloons> you could also migrate to xenial I guess
<bdx> where can I find documentation on bundles vs. charmstore, e.g. how do I push a bundle to the charmstore?
<rick_h> bdx: bundles should act just like charms.
<rick_h> bdx: they're both just zips that get pushed and released and such
<cory_fu> balloons: Travis doesn't offer xenial, AFAICT
<rick_h> bdx: you just have the directory for the readme/yaml file
<bdx> rick_h: http://paste.ubuntu.com/23996456/
<balloons> cory_fu, building charm-tools for trusty and publishing it seems the most straightforward
<bdx> rick_h: looks like it needed a README.md
<rick_h> bdx: otp, will have to look. must be something that's not made it think it's a bundle
<rick_h> bdx: ah ok
<rick_h> bdx: crappy error message there for just that :/
<bdx> rick_h: yeah, I'll file a bug there for clarity
<bdx> thanks
<rick_h> bdx: ty
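[Editor's note: a minimal sketch of pushing a bundle, per rick_h's description that bundles act just like charms; the namespace and bundle name are illustrative.]

    # a bundle is just a directory containing bundle.yaml plus a README
    mkdir mybundle && cd mybundle
    # ... write bundle.yaml and README.md here ...
    charm push . cs:~creativedrive/bundle/mybundle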
<cory_fu> balloons, marcoceppi: I thought charm-tools was already available for trusty?
<cory_fu> balloons: I also can't find the snap for charm-tools
<marcoceppi> snap install charm
<marcoceppi> there's a broken dep for trusty
<cory_fu> marcoceppi: Yeah, I don't understand why the dep broke.  Also, will snaps even work inside Travis?
<marcoceppi> probably?
<cory_fu> kwmonroe: Ok, cs:~juju-solutions/cwr-46 is released to edge
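[Editor's note: a sketch of the push/release flow cory_fu describes; the revision number comes from the push output, so treat -46 as illustrative.]

    charm push . cs:~juju-solutions/cwr        # prints the new revision, e.g. cs:~juju-solutions/cwr-46
    charm release cs:~juju-solutions/cwr-46 --channel edge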
<kwmonroe> gracias cory_fu
<cory_fu> kwmonroe: You probably want the bundle, too
<kwmonroe> nah
<kwmonroe> cory_fu: bundle grabs the latest
<kwmonroe> oh, der.. probably latest stable.  anyway, no biggie, i can wedge 46 into where it needs to go
<cory_fu> kwmonroe: I can release an edge bundle
<kwmonroe> too late cory_fu, i just deployed what i needed
<cory_fu> :)
<bdx> rick_h: so I was able to get my `charm push` command to succeed, now my bundle shows in the store https://imgur.com/a/w4sYr, but when I select "View", I see this ->  https://imgur.com/a/udu1s
<rick_h> bdx: what's the ACL on that?
<rick_h> bdx: hmm ok so that seems like it should work out.
<bdx> rick_h: unpublished, write for members of ~creativedrive
<bdx> I could try opening it up to everyone and see if it makes a difference
<bdx> I was able to deploy it from the cli ...
<rick_h> bdx: well you should be allowed to look at it like that w/o opening it up
<rick_h> bdx: right, you're logged in, first question would be a logout/login
<cory_fu> balloons: Is there a ppa for current stable snap?  Getting this: ZOE ERROR (from /usr/lib/snap/snap): zoeParseOptions: unknown option (--classic)
<cory_fu> balloons: Also of note, our .travis.yml requests xenial but still gets trusty
<bdx> rick_h: yeah .. login/logout did not fix
<rick_h> bdx: k, that sounds like a bug, the output of the ACL from the charm command would be good and I'll try to find a sec to set up a bundle and walk through it myself
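[Editor's note: a hedged sketch of inspecting and widening the ACL rick_h asks about; the entity name is illustrative and the flags should be verified against `charm help show` and `charm help grant`.]

    charm show cs:~creativedrive/bundle/mybundle --channel unpublished perm
    charm grant cs:~creativedrive/bundle/mybundle --channel unpublished --acl read everyone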
<catbus1> Hi, I used conjure-up to deploy openstack with novakvm on a maas cluster. after it's up and running, I try to ssh to the instance via external port, but I can't. I am checking the switch configurations now (to make sure there is no vlan separating the traffic), but wanted to check here to see if there is any known issue here. I added the external port on maas/conjure-up node to the conjureup0 bridge via brctl.
<catbus1> I did specify the external port on the neutron-gateway.
<bdx> rick_h: sweet. thx
<balloons> cory_fu, ppa for a stable snap?
<balloons> that's a confusing statement
<balloons> cory_fu, yea, travis might keep you stuck. But again, fix the depends for trusty or ...
<bdx> can storage be specified via bundle?
<cory_fu> balloons: PPA to get the latest stable snapd on trusty.  Version that installs doesn't support --classic
<balloons> cory_fu, heh. Even edge ppa doesn't supply for trusty; https://launchpad.net/~snappy-dev/+archive/ubuntu/edge
<balloons> cory_fu, but are you sure it doesn't work? I know I've used classic snaps on trusty
<balloons> you probably just need to use backports
<cory_fu> balloons: backports.  That sounds promising.
<balloons> cory_fu, actually, no.. http://packages.ubuntu.com/trusty-updates/devel/snapd
<balloons> that should work
<cory_fu> balloons: How do I tell it to use trusty-updates?
<cory_fu> Ah, -t
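[Editor's note: the `-t` cory_fu lands on selects a release pocket for apt; a minimal sketch:]

    # install snapd from the trusty-updates pocket explicitly
    sudo apt-get install -t trusty-updates snapd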
<cory_fu> Have to run an errand.  Hopefully that will work
<cory_fu> balloons: That didn't work.  :(  https://travis-ci.org/juju-solutions/layer-cwr/builds/201634553
<cory_fu> Anyway, got to run.
<cory_fu> bbiab
<bdx> lazyPower: just documenting the next phase of the issue http://paste.ubuntu.com/23996697/
<lazyPower> bdx - self signed cert issue at first glance
<lazyPower> just got back from meetings / finishing lunch
<marcoceppi> cory_fu: I'll have a dep fix tomorrow
<bdx> lazyPower: yea, entirely, just not sure how deis is ever expected to work if we can't specify our own key/cert :-(
<magicaltrout> lunch is banned until stuff works!
<lazyPower> bdx - spinning up a cluster now
<lazyPower> give me 10 to run the deploy and i'll run down the gsg of deis workflow, see if i can identify the weakness here
<lazyPower> bdx - i imagine this can be resolved by pulling in the CA from k8s and adding it to your chain, which is not uncommon for self-signed ssl activities
<bdx> lazyPower: oooh, I hadn't thought of that
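[Editor's note: a hedged sketch of "pulling in the CA from k8s and adding it to your chain" on an Ubuntu client; the jsonpath assumes a single-cluster kubeconfig.]

    kubectl config view --raw \
      -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
      | base64 -d > k8s-ca.crt
    sudo cp k8s-ca.crt /usr/local/share/ca-certificates/k8s-ca.crt
    sudo update-ca-certificates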
<lazyPower> rick_h - do you happen to know if storage made it into the bundle spec?
<lazyPower> or is that strictly a pool/post-deployment supported op
<kwmonroe> cory_fu: you badge real good:  http://juju.does-it.net:5001/charm_openjdk_in_cs__kwmonroe_bundle_java_devenv/build-badge.svg
<rick_h> lazyPower: yes https://github.com/juju/charm/blob/v6-unstable/bundledata.go check storage
<lazyPower> aha fantastic
<rick_h> lazyPower: it can't create pools but can use them I believe via constraints and such
<bdx> rick_h: awesome thanks
<bdx> rick_h: any docs around that?
<rick_h> bdx: I think it's under documented.
<rick_h> bdx: sorry, in line at the kid's school ATM so phoning it in (to IRC)
<lazyPower> i'm filing a bug for this atm rick_h - i gotchoo
<lazyPower> bdx - nope sadly we're behind on that one. However https://github.com/juju/docs/issues/1655 is being edited and will yield more data as we get it
<bdx> lazyPower: awesome, thx
<lutostag> blackboxsw: fginther: we should totally set up a call for figuring out best path to merge https://github.com/juju/autopilot-log-collector & https://github.com/juju/juju-crashdump -- talking use-cases and merging
<blackboxsw> lutostag, seems like  log-collector might be a potential consumer/customer of crashdump as it tries to pull juju logs and other system logs together into a single tarfile
<blackboxsw> ahh extra_dir might be what we need there
<lutostag> blackboxsw: yeah, I was curious about your inner-model stuff in particular to make sure we could do that for you too
<stormmore> hey lazyPower I am still trying to understand ingress controllers... is there a way of load balancing floating IPs using them?
<lutostag> blackboxsw: and your use of ps_mem as well (the motivation behind that and what it gives you)
<fginther> lutostag, it is a bit tuned to our CI log analysis, but would not be impossible to merge things
<lutostag> fginther: yeah, OIL has a similar log analysis bit I'm sure, and merging heads seems like a good idea -- get everybody on one crash-collection format, then standardize analysis tools on top of it
<lazyPower> stormmore - actually yes, you can create ingress routes that point at things that aren't even in your cluster
<lazyPower> stormmore - but it sounds more like you're looking specifically towards floating ip management?
<stormmore> lazyPower - just trying to efficiently utilize my public IP space while providing the same functionality that the service type loadBalancer gives in AWS
<lazyPower> stormmore - so in CDK today - every worker node acts as an ingress point. Every single worker is effectively an ELB style router
<fginther> lutostag, yeah, no objections to going that route. Would hopefully make things easier in the future
<lazyPower> stormmore - you use ingress to slice that load balancer up and serve up the containers on proper domains. I would think any floating IP assignment you're looking to utilize there would probably be best served by being pointed directly at the units you're looking to make your ingress tier (this will tie in nicely to future work with support for worker pooling via annotations, but more on this later)
<lutostag> fginther: since we already have 2 ci-teams doing it, we should smash em together so that others don't have to re-invent; wish I had reached out to you guys earlier tbh :/
<stormmore> lazyPower yeah I saw that, I am more looking at providing service IP / VIPs instead
<lazyPower> ok for doing virtual ips, i'll need to do some reading
<lazyPower> stormmore - i've only tested nodeport for that level of integration, there's more to be done there for VIP support i'm pretty sure
<stormmore> lazyPower the problem with using the node's IP is what happens to the IP if the node goes down
<lazyPower> right and that varies on DC
<lazyPower> it could stay the same, could get reassigned
<stormmore> exactly... my idea right now is to figure out how to run keepalived in the environment on one of the interfaces
<lazyPower> stormmore - the thing is, when you declare a service in k8s you're getting a VIP
<lazyPower> but its not one that would be routable outside of the cluster
<lazyPower> https://kubernetes.io/docs/user-guide/services/#ips-and-vips
<stormmore> have keepalived as a DaemonSet and then be able to assign an IP to the internal service IP
<lazyPower> ok, that sounds reasonable
<stormmore> seems the logical way of separating out the actual used IPs from the infrastructure 100%
<lazyPower> yeah i see what you mean. i was looking at this
<lazyPower> https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/
<lazyPower> i presume this is more along the lines of what you were looking for, with --type=loadbalancer
<lazyPower> where its just requesting an haproxy from the cloud at great expense, to act as an ingress point to the cluster VIP of that service
<stormmore> yeah I have been looking at that from an AWS standpoint and allows kubernetes to setup the ELB
<lutostag> fginther: I'll spend a little time making juju-crashdump more library-like and importable, then maybe I'll set up a call for early next week to discuss what you guys need in terms of output format, to minimize disruption to your analysis, if that's ok
<fginther> lutostag, a call next week would be fine... But please note that some of the content of autopilot-log-collector is no longer needed
<fginther> all of the juju 1 content can be removed
<fginther> lutostag, I wouldn't want you to implement a lot of changes for the sake of autopilot-log-collector and have them not be used
<cory_fu> balloons, marcoceppi: It turns out I was installing "snap" when I should have been installing "snapd".  Unfortunately, it seems that snaps do not in fact work in Travis: https://travis-ci.org/juju-solutions/layer-cwr/builds/201647356
<balloons> cory_fu, ahh, whoops
<balloons> it's possible to ignore that, but not with our package
<lutostag> fginther: ok, sure, we'll make a list!
<cory_fu> balloons: What do you mean, "not with our package"?
<balloons> cory_fu, snapd can be built with selinux / apparmor or perhaps no security model. not sure. But the ubuntu package absolutely wants AppArmor
<lazyPower> kwmonroe - what bundle is that svg from?
<lazyPower> charm_openjdk_in_cs__kwmonroe_bundle_java_devenv <-- kinda tells me but kinda doesn't
<cory_fu> lazyPower: The bundle would be cs:~kwmonroe/bundle/java-devenv
<lazyPower> oh i guess its this https://jujucharms.com/u/kwmonroe/java-devenv/
<lazyPower> ninja'd
<lazyPower> watching matt deploy some ci goodness
<kwmonroe> well if matt can do it, we're in great shape.
<lazyPower> bdx - just got deis up and running, stepping into where you found problems i think
<lazyPower> "register a user and deploy an app" right?
<cholcombe> so i finally got around to trying lxd on my localhost.  My lxc containers are started with juju and they seem to be stuck in 'allocating'.  I'm not sure why
<lazyPower> bdx - can you confirm your deis router is currently pending and not started?
<lazyPower> bdx - looks like a collision between the ingress controller we're launching and the deis router.
<lutostag> cholcombe: if you do a lxc list, do you see any "juju" machines there?
<cholcombe> lutostag: yeah they're def running
<cholcombe> i deployed 3 machines and i see 4 lxc containers with juju- in the name
<lazyPower> bdx yeah i was able to get the router scheduled by disabling ingress; we made some assumptions there that prevent deis from deploying cleanly, as both are attempting to bind to host port 80
<lutostag> cholcombe: I would try lxc exec <juju-...> bash # and then run top in there and see where it is stuck
<cholcombe> lutostag: ok
<lutostag> cholcombe: at one point there was an issue with things not apt-upgrading appropriately and getting stuck there indefinitely
<cholcombe> lutostag: ahh interesting
<cholcombe> lutostag: i don't see much going on.  let me check another one
<lutostag> but that was months ago, still going in and poking is the best way to find it
<cholcombe> looks like everything is snoozing
<lutostag> cholcombe: hmm, if they are all still allocating, mind pasting one of the containers' "ps aux" output to paste.ubuntu.com
<cholcombe> lutostag: sure one sec
<lutostag> if no luck there, we'll have to see what the juju controller says...
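[Editor's note: the debugging loop lutostag walks cholcombe through, collected into one sketch; the container name is illustrative.]

    lxc list                                   # find the juju-* containers
    lxc exec juju-xxxx -- bash                 # shell into a stuck one
    ps aux                                     # what, if anything, is running?
    tail -n 50 /var/log/cloud-init-output.log  # a common place to be stuck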
<lazyPower> bdx - i see the error between your approach and what its actually doing
<lazyPower> bdx - you don't contact the kube apiserver for this; it's attempting to request a service of type loadbalancer to proxy into the cluster and give you all that deis joy
<cholcombe> lutostag: http://paste.ubuntu.com/23997274/
<lazyPower> bdx - i'd need to pull in the helm charts and give it a bit more of a high-touch to make it work as is, in the cluster right now. It would have to use type nodeport networking, and we would need to expose some additional ports
<lazyPower> Budgie^Smore - cc'ing you on this as well ^    if your VIP work-around works, i'd like to discuss the pattern a little more and dissect how you went about doing it, as it seems like there's a lot of useful applications for that.
<cholcombe> lutostag: the last message i see in the unit logs are that it downloaded and verified my charm
<lutostag> cholcombe: yeah, I can't see anything popping out at me, nothing helpful in "juju debug-log" ?
<cholcombe> lutostag: no just messages about leadership renewal
<lutostag> cholcombe: hmm, looks like maybe you are stuck in the exec-start.sh. You could try rebooting one of those containers
<lutostag> fighting another juju/lxd issue here myself, and a bit out of my depth, wish I could be more helpful
<bdx> lazyPower: ok, that would make sense, awesome
<lazyPower> bdx - however as it stands, this doesn't work in an obvious fashion right away. not sure when i'll have time to get to those chart edits
<cholcombe> lutostag: no worries
<bdx> lazyPower: what are my options here? am I SOL?
<bdx> until you have time to dive in
<lazyPower> bdx  not at all, you can pull down those helm charts and change the controller to be type: hostnetwork or type: nodeport
<bdx> oooh
<lazyPower> then just reschedule the controller
<lazyPower> helm apply mything.yml
<lazyPower> its devops baby
<lazyPower> we have the technology :D
<bdx> nice
<bdx> I'll head down that path and see what gives
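[Editor's note: besides editing the helm charts as lazyPower suggests, a hedged alternative for flipping the deis router off type LoadBalancer; the namespace and service names are assumptions.]

    kubectl --namespace deis patch service deis-router \
      -p '{"spec": {"type": "NodePort"}}'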
<bdx> lazyPower: as always, thanks
<lazyPower> bdx - c'mon man :) We're family by now
<bdx> :)
 * Budgie^Smore (stormmore) had to come home since his glasses broke :-/ 
<lazyPower> Budgie^Smore - i've been there, done that. literally this last week
<andrew-ii> On one of my machines, LXD containers do not seem to deploy. No obvious error, machine (i.e. 0/lxd/0) just never leaves the Pending status, and the agent is stuck on "allocating". Am I missing something obvious?
<andrew-ii> Oh, and occasionally another machine doesn't deploy containers as well. Is there a point where something can cause LXD container deployments to fail without error/retry?
<Budgie^Smore> lazyPower ouch! it just sucks how short-sighted I am :-/
<lazyPower> andrew-ii - which version of juju?
<andrew-ii> 2.0.2
<andrew-ii> Running on MAAS 2.1.3
<andrew-ii> Machine seems to be connected and healthy, and an app will deploy, just not to a container
<andrew-ii> cat /var/log/lxd/lxd.log is just like my other machines, except it doesn't have the "alias=xenial ....", "ephemeral=false...", or "action=start ...." lines
<Budgie^Smore> to make matters worse, I have 4k displays
<lazyPower> Budgie^Smore - so i just confirmed the registry action's tls certs work as expected, not sure if you were still on that blocker
<lazyPower> but i can help you decompose a workload using this as a template to reproduce for additional tls terminated apps
<lazyPower> i deployed one using letsencrypt certs and it seems to have gone without an issue
<Budgie^Smore> yeah that is a blocker still, got side tracked with another blocker anyway and looking at integrating with the ELB for now
<Budgie^Smore> with the let's encrypt method are you manually uploading the key and cert to the k8s secrets vault?
<Budgie^Smore> I am also trying to build a cluster in a box so I can test scenarios better than right now
<lazyPower> yeah
<lazyPower> Budgie^Smore - it's encapsulated in the action, it's taking the base64 encoded certs and stuffing them in the secret template and enlisting it
<Budgie^Smore> hmmm so if I go to the dashboard and look at the secrets should I see the base64 or the ascii?
<lazyPower> base64
#juju 2017-02-15
<Budgie^Smore> hmm rookie mistake time? :-/
<lazyPower> Budgie^Smore - i've lied to you
<lazyPower> its non base64 encoded when looking at the secret via the webui
<Budgie^Smore> OK *phew* don't scare me like that :P
<Budgie^Smore> then it doesn't make sense why it is serving the wrong cert even after removing and readding the service :-/
<Budgie^Smore> not to mention removing and readding the secret, simplifying the secret name, etc.
<lazyPower> Budgie^Smore - when you're in a better position to debug, lets try to get you some TLC and get you unblocked.
<lazyPower> i'm going be sticking around late tomorrow by about an hour or so, and i'm happy to help debug/reproduce then
<lazyPower> think you'll have the time for that Budgie^Smore?
<Budgie^Smore> that works, I am going to try and get my virtualized environment up today
<lazyPower> ok, that sounds great then
<lazyPower> i'll see you tomorrow evening :)
<Budgie^Smore> first step in solving a problem is figuring out if you have a problem at all, better to do that from a simplified cluster to rule out complexities
<Budgie^Smore> I need to do it anyway since I want to be able to basically make a master node made up of VMs (for now) so that I can just spin up worker nodes... call me crazy but I want to containerize everything I can :P
<Budgie^Smore> I would probably containerize kubernetes masters if I could :P but that is totally just crazy talk
<lazyPower> Budgie^Smore why do you think that's crazy talk? they are really gung-ho for self hosted at the developer watering holes
<lazyPower> i'm not sold on the idea myself, but if its done well i can see why its attractive
<Budgie^Smore> OK not sure crazy talk then
<lazyPower> nah  just mostly crazy if you forget the triple-o work back in the day and how big of a sideshow that was
<lazyPower> not that it wasn't a feat of engineering, but that it was clunky and not really intuitive
<lazyPower> ok i'm going to shut up now i feel like i'm dissing someone else's work...
 * lazyPower checks out for the evening
<Budgie^Smore> it is all about making sure that the services are all hardened correctly, I am not totally sold on the idea of the masters but everything else yes
<Budgie^Smore> giving my laptop a real workout tonight, backing up a couple of VMs to USB and then I am going to do system reset and image it so I can wipe the hard drive and put Ubuntu on it!
<kjackal> Good morning Juju world!
<kjackal> Hi nobuto do you have a charm that uses the apache-php layer?
<nobuto> kjackal: not really. but I demoed how to write a new charm in front of customers along with the official doc (https://jujucharms.com/docs/stable/developer-layer-example) it never worked with xenial.
<nobuto> that's why I made a pull request to the layer.
<kjackal> nobuto: I see. thank you for the PR. I am testing it right now with a dummy charm. Hopefully I am going to merge it. Lets see!
<nobuto> kjackal: thanks
<kjackal> nobuto: do you have a valid apache.yaml ?
<kjackal> nobuto: I am getting this exception: http://pastebin.ubuntu.com/24000015/ Could be because the apache.yaml I found on the web is not valid
<pranav__> Hey. Can we configure Juju 2.0 with maas server?
<kjackal> pranav__: yes
<kjackal> pranav__: https://jujucharms.com/docs/2.0/clouds-maas
<nobuto> kjackal: let me prepare apache.yaml for you to test.
<pranav__> @kjackal many thanks!
<chitteti> Hi juju World!!!
<chitteti> the charm release command is giving the error "ERROR cannot release charm or bundle: unauthorized: access denied for user"
<chitteti> but I am able to run the push command and it's giving a revision too
<chitteti> url: cs:~ibmcharmers/xenial/ibm-db2-2 channel: unpublished
<nobuto> kjackal: how about this? https://github.com/nobuto-m/layer-vanilla/blob/xenial/apache.yaml
<kjackal> nobuto: I got another exception this time http://pastebin.ubuntu.com/24000102/
<nobuto> kjackal: I used example vanilla layer. so checksum might be different. let me check.
<kjackal> But this is past the patch you are proposing, so i will merge the PR. However, this layer seems rather old, and you might not want to use it for demos
<nobuto> kjackal: right. vanilla layer needs to be updated to use new apache.yaml syntax: https://github.com/nobuto-m/layer-vanilla/blob/xenial/apache.yaml
<nobuto> kjackal: anyway our new charm development story on jujucharms.com might need a face lift.
<Guest34515> the channel-wise charm revoke command is not working. how can we fix that?
<kklimonda> how do I create services in bundle so more than one can be installed on a single machine?
<kklimonda> right now, it seems to be ignoring "to" field, and spawn new machines for services
<Guest34515> the channel-wise charm revoke command is not working. Please advise
<marcoceppi> kklimonda: the to field should do it. could you share your bundle?
<marcoceppi> Guest34515: are you getting an error message? Could you describe what's not working?
<kklimonda> @marcoceppi: ah, I see - it seems if I add a new service to the bundle file, "to" is ignored
<kklimonda> (add a new service and deploy again without cleaning the environment)
<stokachu> kklimonda, is this with conjure-up?
<kklimonda> no, pure juju 2.0
<stokachu> ok
<kklimonda> perhaps there is a different way to incrementally work on bundles?
<kklimonda> so I can write them one application at a time
<Guest26106> how to revoke specific revisions of a charm in the charm store?
<Guest26106> please help
<marcoceppi> Guest26106:  are you getting an error message? Could you describe what's not working?
<bildz> good morning
<bildz> I am wanting to bootstrap juju to perform a canvas install of openstack, not through autopilot
<bildz> the issue I am having is the internal network created by the lxd bridge on the bootstrapped juju machine
<bildz> 10.0.0.0/24
<marcoceppi> bildz: does that collide with your current network?
<bildz> no, it's NAT'ed and preventing root based containers from getting a routable IP
<bildz> I need the LXD bridge to use the correct DHCP, instead of the default bridge network
<bildz> \o/
<h00pz> ok guys how the heck do we reconfigure lxdbr0 to use my existing network (and dhcp) when deploying an app, instead of going rogue and using its own defined and useless 10.x.x.x network
<bildz> marcoceppi: h00pz and I work next to each other
<marcoceppi> bildz h00pz is this on maas?
<h00pz> the hosts are deployed by maas but avoiding using autopilot as it sucks ass at placement
<h00pz> we stood up a standalone juju controller and `juju add-machine ssh:<host>` for all the computes
<h00pz> then we added the openstack env and created the lxd containers for the various services but when it came to the lxd networking it went and used 10.x.x.x
<h00pz> we would like to know how to change that lxd networking to use the same network and dhcp as the hosts they will be on
<h00pz> marcoceppi: any idea how to hack the lxd bridges ?
<kjackal> kwmonroe: cory_fu: and easy to merge update on the readme https://github.com/juju-solutions/layer-cwr/pull/86
<bdx> I posted this in #netfilter and #lxdcontainers, possibly someone here has some insight ...
<bdx> having some issues getting packets through to lxd containers via iptables nat, wondering if someone might shed some light on my attempt to nat from host to container
<bdx> I'm applying a prerouting rule on my external interface in order to nat through the host to a lxd container on lxdbr0, the prerouting rule is "iptables -t nat -A PREROUTING -i ens3 -p tcp --dport 6379 -j DNAT --to 10.0.0.160:6379"
<bdx> the packets don't seem to be making it through to the container though.... I'm wondering if there are any tricks of the trade I'm missing here?
<bdx> trying to introspect the ufw rules docker creates in an attempt to recreate
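[Editor's note: a DNAT PREROUTING rule alone is often not enough; a hedged sketch of the usual companion pieces, reusing bdx's addresses:]

    sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A PREROUTING -i ens3 -p tcp --dport 6379 \
      -j DNAT --to-destination 10.0.0.160:6379
    # forwarded packets must also be accepted; ufw/docker often set the
    # FORWARD policy to DROP, which silently eats DNATed traffic
    iptables -A FORWARD -i ens3 -o lxdbr0 -p tcp -d 10.0.0.160 --dport 6379 -j ACCEPT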
<kklimonda> bbcmicrocomputer: which contrail charms on jujucharms.com are most up to date?
<kklimonda> https://api.jujucharms.com/charmstore/v5/cassandra-29/archive/config.yaml - why is install_sources type "string" when the default value is actually a list?
<bbcmicrocomputer> kklimonda: https://jujucharms.com/u/sdn-charmers/
<kklimonda> thanks
<bbcmicrocomputer> kklimonda: bundles are here - http://bazaar.launchpad.net/~sdn-charmers/+junk/contrail-deployer/files/head:/bundles/
<bbcmicrocomputer> kklimonda: works best with Contrail 3.2/3.1 commercial packages from Juniper
<kklimonda> have you looked into dpdk vrouter?
<bbcmicrocomputer> (require license)
<bbcmicrocomputer> kklimonda: these charms don't support dpdk
<bbcmicrocomputer> Contrail 4.x charms from Juniper should do (due April)
<kklimonda> sigh, I need R3.1 with dpdk and juju - fortunately I'm familiar with contrail itself, so I just have two unknowns
<cholcombe> anastasiamac: upgrading to juju 2.1-rc1 to fix the bug: https://bugs.launchpad.net/juju/+bug/1605241 causes juju to no longer be able to bootstrap a localhost environment.  It just says: cloud localhost not found
<mup> Bug #1605241: lxd instances not starting <conjure> <juju:Fix Committed> <https://launchpad.net/bugs/1605241>
<bdx> is there a method in charmhelpers that returns the primary network interface?
<bdx> ahh, just found it - charmhelpers.contrib.network.ip.get_iface_addr()
<bdx> even better - charmhelpers.contrib.network.ip.get_iface_from_addr(addr), nice
<lazyPower> Zic - you around?
<rick_h> Juju Show #6 in 14min
<rick_h> get your popcorn
 * jrwren fetches popcorn
<rick_h> thedac: arosales marcoceppi lazyPower and anyone else I'm missing the HO url will be https://hangouts.google.com/hangouts/_/ytl/jtR7zxxKKNe2lyJ_QtFXBACCGGaZLppiaK6hWnMYUyI=?eid=103184405956510785630
<rick_h> and the viewing url is: https://www.youtube.com/watch?v=K-cWDvM2zts
 * thedac nods
<arosales> rick_h: ack omw
<thedac> rick_h: I am getting "you do not have access to this page"
<rick_h> thedac: https://hangouts.google.com/hangouts/_/ytl/jtR7zxxKKNe2lyJ_QtFXBACCGGaZLppiaK6hWnMYUyI=?eid=103184405956510785630&hl=en_US&authuser=0 is my full url
<rick_h> thedac: last time folks had to remove the hl and authuser keywords, maybe try setting up authuser to the right one for your account
<thedac> ok
 * rick_h also manually invites you in via email
<thedac> thanks
<magicaltrout> doesn't modern technology suck?! ;)
<rick_h> 5 minute warning
<rick_h> magicaltrout: at times...then it does magical things and tries to make up for it
<thedac> just fyi, still not able to get in. No email received
<thedac> Sure this is not restricted to a specific circle?
<rick_h> thedac: try https://hangouts.google.com/call/kwu2kkxx5ve5rdlgkdcznmwoeae
<thedac> That seems to be working
<rick_h> arosales: marcoceppi lazyPower and anyone else ^
<arosales> youtube play list
<arosales> https://youtu.be/OBseJVHuVXI?list=PLW1vKndgh8gJS4upPNaXiYYHnCmFdWk03
<magicaltrout> sat in a darpa webex and office hours, this is like multi tasking overload
<mbruzek> Can I get a link to "watch" the juju sho?
<magicaltrout> mbruzek: https://www.youtube.com/watch?v=K-cWDvM2zts
<mbruzek> Thanks magicaltrout
<mbruzek> marcoceppi: is it like brackets?
<arosales> magicaltrout: lolz re darpa webex :-)
<marcoceppi> mbruzek: hah, you wish
<cholcombe> how do you remove a storage pool from juju lol?  I can't seem to find the cmd
<arosales> cholcombe: looking here https://jujucharms.com/docs/stable/charms-storage
<arosales> I think it is also dependent on when you want to remove the pool.
<cholcombe> arosales: yeah i was looking there
<cholcombe> everything there is about creating storage pools.  nothing about removing them
<cholcombe> is it tied to the controller maybe?
<arosales> ya I am also looking for remove
<marcoceppi> https://insights.ubuntu.com/2017/02/10/webinar-getting-started-with-the-canonical-distribution-of-kubernetes/
<arosales> cholcombe: may have to ping in #juju-dev
<cholcombe> arosales: ok
<arosales> cholcombe: I think wallyworld and axw were working on storage and there was some gaps
<arosales> cholcombe: would be interested what you find out for 2.1 support
<cholcombe> arosales: yeah i've talked with axw a few times.  I might ping him on this
<arosales> cholcombe: +1
<arosales> I think they come online here in a bit
<arosales> ~3-4 hours
<arosales> I think you can catch them more easily on west coast time
<arosales> cholcombe: sorry I don't have a better answer
<cholcombe> arosales: no worries
<arosales> cholcombe: asked folks in the hangout too, but no command that we were able to find
<cholcombe> arosales: yeah i don't think it exists.  that is really strange
<rick_h> axw: the destroy is coming? ^
<kklimonda> interesting, juju is hardcoded to connect to streams.canonical.com, even if local mirrors for tools and images are configured
<kklimonda> that was the cause of a very long delay between juju deploy and MAAS starting to provision a machine
<cholcombe> arosales: do you know the env variable i have to set to allow loopback devices in lxc with juju?
<cholcombe> axw: ^^
<arosales> cholcombe: is that ENV or LXD profile?
<cholcombe> arosales: i thought i saw in a PR that you could set an env variable for juju and it would set the profile on creation
<cholcombe> arosales: http://reviews.vapour.ws/r/1154/
<cholcombe> looks like it says if StorageConfig.AllowMount is true then it sets it
<arosales> cholcombe: ah ok :-)
<arosales> good to know thanks cholcombe
<cholcombe> arosales: https://github.com/juju/juju/pull/1826/files#diff-abd61728f26e92bea6ee732aa19f7808R17
<arosales> cholcombe: good stuff
<cholcombe> we need to document this.  this was entirely too hard to find
<arosales> cholcombe: could you file a bug @ https://github.com/juju/docs/issues/
<cholcombe> arosales: yup will do
<rahworks> Hello all, I followed the "kubernetes cluster Easy Way" tutorial and decided to add deis workflow to it. The helm install failed at the deis install step with the following message: Error: forwarding ports: error upgrading connection: Upgrade request required. Any suggestions on how to proceed?
<lazyPower> rahworks - the "upgrade request" bits are due to the APILB charm. It's a layer-7 router and it doesn't support SPDY, which kubernetes requires. There's a bug to replace this with an ELB as i know you are an AWS shop - https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/183
<lazyPower> rahworks - additionally we are cycling towards replacing with HAProxy for the non cloud-koolaid version of that service
<lazyPower> rahworks - there is a published work around for the helm installer failure because of the apilb - https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/#common-problems
<rahworks> lazypower thanks for the links, will take a look.
<ravenpi> I love deploying to containers, because it "boots" so darn fast.  But it seems I always need a "base" charm installed on a physical system before I can install additional charms to containers on that physical system.  Is there a dummy juju charm I could install to set up a physical system, then install the charms I actually care about as containers?
<lazyPower> ravenpi - juju deploy ubuntu
<ravenpi> D'oh!  It is safe to say I would never have thought of that.  Too darn obvious.  :-)  Thanks!!
<lazyPower> anytime :) we've hidden it in plain sight.
<ravenpi> +1
<lazyPower> rahworks - you're going to find another issue after that helm work-around though
<lazyPower> rahworks - the deis chart is going to try to provision a service of type LoadBalancer, which will never be satisfied. bdx and i spoke to this last night
<rahworks> ohh ok...
<lazyPower> rahworks - we'll need to submit a patch for an alternative path (users on bare metal will love us for this) where it uses NodePort service type, or we'll have to manually configure an HAProxy alternative to forward the deis requests to the router
<lazyPower> rahworks - additionally, if you have ingress=true on your workers, the deployment will never complete, as the ingress controller is occupying port 80, which the deis router wants.
<rahworks> One extra step I did do was to allow all tcp traffic from my workstation...via sg
<lazyPower> rahworks - are you trying this in lxd?
<rahworks> just applying the deis install to the canonical-kube install... nothing lxd specific
<lazyPower> ok, that statement of "allow all traffic from my workstation" caught me off guard, not sure how that fits into the order of operations here.
<rahworks> ohh... well after i updated the config to point to port 6443, the security groups setup for the master defaults to block that port.
<lazyPower> rahworks - you should be able to juju expose kubernetes-master, and get that unblocked. it's just defaulted to unexposed in HA formation to help isolate traffic away from it (read: slightly more secure by default)
<rahworks> ohh ok..
<bdx> lazyPower: container networking in 2.1 you say???
<lazyPower> bdx - kick the tires and tell us what works for you and what doesnt :)
<bdx> omg - on it
<lazyPower> bdx https://lists.ubuntu.com/archives/juju/2017-February/008595.html
<cholcombe> evilnick___: so yeah that setting doesn't seem to do it
<cholcombe> is there a way to list the model config values currently set?
<evilnick___> cholcombe, :(
<evilnick___> yes, just juju model-config
<cholcombe> evilnick___: cool.  do i need to create the model before this'll take?
<evilnick___> cholcombe, you can set it as a default for models, then create a new model
<evilnick___> i'm not sure if that will make a difference
<cholcombe> ok then yeah it doesn't work :-/
<evilnick___> ah. in that case maybe it didn't make it to LXD
<cholcombe> evilnick___: yeah wallyworld just commented
<cholcombe> that stinks.  i really want to test gluster on lxd so i can have it grab some floating ip's from my bridge and use them
<cholcombe> allocating elastic ip's on ec2 is a pain
<wallyworld> there are security issues with loop mounts
<evilnick___> cholcombe, well, sorry about that, but at least I can now turn that issue into removing the reference from the storage page.
<cholcombe> wallyworld: yeah i saw in the commit
<wallyworld> it was enabled with lxc because those were privileged
<cholcombe> i see
<cholcombe> wallyworld: can we at least give people the option to use said unsafe thing?
<wallyworld> there are plans to come up with a solution, but there are things to consider so we don't do bad things
<cholcombe> i'm totally onboard with this being bad in production but for dev it's difficult to work around
<wallyworld> yeah understood
<wallyworld> it's something that fell off the "most important thing to do next" list
<cholcombe> yeah i understand
<wallyworld> i'll prod the right folks to look into it
<cholcombe> wallyworld: are there any block devices that'll work on my local lxd?
<wallyworld> without loop devices, i don't think so
<cholcombe> gah
<wallyworld> leave it with me and i'll dig to get a proper answer
<cholcombe> ok
<cholcombe> wallyworld: maybe a zfs provider :)
<cholcombe> that would be sweet for local
<wallyworld> yeah, it would be
<wallyworld> LXD is getting new storage APIs this cycle - juju was going to support those but we've had a team restructure and may need to drop that work
<cholcombe> ah interesting.  i'll talk to rockstar about it and see what he knows
<wallyworld> so it is recognised we have work to do, but not the people to do it as the next priority at the moment
<axw> cholcombe: there's no command to remove pools, and the config attribute we used to have for loop devs no longer exists in juju 2
<axw> cholcombe: you would have to make the necessary changes to the lxd profile by hand
<cholcombe> axw: i see
<cholcombe> can i set it globally for lxd?
<axw> cholcombe: you can set it in the default profile
<axw> cholcombe: juju also creates profiles for each model, so you can set it on a per-model basis
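[Editor's note: a hedged sketch of the profile change axw describes; dev/test only, since privileged containers weaken isolation, and the per-model profile name is illustrative.]

    lxc profile set default security.privileged true
    # juju also creates one profile per model, named like juju-<model>:
    lxc profile set juju-mymodel security.privileged true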
<cholcombe> axw: cool.  if that was in the docs that'd be super helpful :D
<cholcombe> i wouldn't have to bug ya
<axw> cholcombe: I'll find out exactly what profile changes are required and update the docs issue
<cholcombe> axw: awesome!  I'm looking forward to test driving this
<axw> cholcombe: https://github.com/juju/docs/issues/1665
<cholcombe> axw: :D
 * cholcombe high fives axw
<axw> cholcombe: btw, LXD is adding a storage API that we'll make use of when we're both ready
<cholcombe> cool
<axw> cholcombe: so you'll be able to add volumes into a container programmatically
<cholcombe> sounds good to me !
<andrew-ii> Is it recommended to deploy charms one at a time, or is it better to use bundles?
<andrew-ii> I've been trying to get a modified openstack bundle to deploy properly, but I'm wondering if it's smarter to build it piece by piece
#juju 2017-02-16
<Budgie^Smore> well it has been one of those days!
<lazyPower> Budgie^Smore how so?
<Budgie^Smore> ended up reinstalling the work laptop ... windows to ubuntu
<lazyPower> ah the great nuking and repaving day
<Budgie^Smore> working on a Vagrantfile to create the different VMs that I need... came across an old git repo of someone doing that for MaaS
<Budgie^Smore> oh and the windows to ubuntu change was because I couldn't get ssh to work nicely so I could ssh in and start a VM headless from the CLI
<kjackal> Good morning juju world!
<kklimonda> when a hook is failing, I can run juju debug-hooks unit to see what's happening
<kklimonda> how can I change options being passed to the hook?
<kjackal> hi kklimonda I am not sure you can change the parameters passed to the hook. However since you are in debugging mode you can alter the values in the parameters you already get, right?
<kklimonda> yes, I can definitely do that
<kklimonda> or I could if this wasn't used in a dozen different places.. I have to figure out where to set it in hooks
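[Editor's note: a minimal sketch of the debug-hooks workflow being discussed; unit and hook names are illustrative.]

    juju debug-hooks mycharm/0
    # a tmux session opens on the next hook event; inside it you can
    # inspect context and re-run the hook by hand:
    #   config-get --all
    #   hooks/config-changed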
<Zic> lazyPower: ping back
<Zic> (but you're probably sleeping :p)
<kklimonda> if bundle deployment fails on one machine, can I retry just this one machine?
<BlackDex> i'm working on a shared server with someone, and we both use juju but with different environments
<BlackDex> i mean models
<BlackDex> are there envirionment variables i can use so that juju selects that model instead?
<BlackDex> because if i, or someone else does juju switch, its selecting that model
<BlackDex> JUJU_ENV isn't working
<anrah> would JUJU_MODEL work?
<BlackDex> no idea
<BlackDex> lets check
<BlackDex> yes
<BlackDex> that sounds like a nice feature
<BlackDex> :)
<BlackDex> A pity it isn't documented
<BlackDex> hmm it is on github i see
<anrah> BlackDex: https://jujucharms.com/docs/stable/reference-environment-variables
<BlackDex> how could i have missed that
<BlackDex> i search for environment in the docs
<BlackDex> :S
<BlackDex> probably some typo
<BlackDex> thx!
<anrah> np! :)
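[Editor's note: a minimal sketch of the JUJU_MODEL trick that solved BlackDex's problem; the controller and model names are illustrative.]

    # pin this shell to one model so a co-worker's `juju switch`
    # doesn't redirect your commands
    export JUJU_MODEL=mycontroller:mymodel
    juju status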
<anrah> Does bundle deployment work with manual cloud?
<anrah> I have manually added couple servers to juju and I would like to use bundle-file to deploy my apps
<anrah> I get errors for each machine:
<anrah> placement "0" refers to a machine not defined in this bundle
<anrah> when setting machine-part on the bundle, i get error:
<anrah> ERROR cannot deploy bundle: cannot create machine for holding my-charm unit: cannot add a new machine: use "juju add-machine ssh:[user@]<host>" to provision machines
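[Editor's note: the first error means placements must reference machines declared in the bundle itself; a hedged sketch follows (names illustrative). On the manual cloud those machines still have to exist already, added via `juju add-machine ssh:...`, which is what the second error is about.]

    cat > bundle.yaml <<'EOF'
    machines:
      "0":
        series: xenial
    services:
      my-charm:
        charm: cs:~me/my-charm
        num_units: 1
        to: ["0"]
    EOF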
<alexlist> @jcastro: in LP#1662172 I mentioned that conjure-up didn't copy .kube/config and kubectl to the controlling host, and just noticed the workaround is documented here: https://github.com/juju-solutions/bundle-canonical-kubernetes/tree/master/fragments/k8s/core - however to streamline things, I suggest to amend the docs to copy kubectl to ~/bin which should be in people's path if they use the default .profile from /etc/skel
<jcastro> I did this a bunch of time yesterday and at the end it prompted and copied the binary over
<jcastro> oh, and also, if not obvious, for 1.5.x we stopped bundling the elastic bundle by default, though you can still deploy after the fact
<jcastro> oh ok, I see we put that workaround in the docs anyways: https://kubernetes.io/docs/getting-started-guides/ubuntu/installation/
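[Editor's note: a hedged sketch of the copy workaround being referenced; the unit name and remote paths are assumptions based on the discussion, not taken from the linked docs.]

    mkdir -p ~/.kube ~/bin
    juju scp kubernetes-master/0:config ~/.kube/config
    juju scp kubernetes-master/0:kubectl ~/bin/kubectl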
<jcastro> alexlist: any other feedback on that page before I start a PR?
<alexlist> jcastro: Not yet, will probably redo the whole thing once more in a VM to verify...
<jcastro> did you deploy the bundle manually or via conjure?
<alexlist> conjure
<jcastro> ok, at the end conjure should prompt you to copy the creds and binary over automatically
<alexlist> Lemme try this once more...
<jcastro> if that doesn't happen lmk the version of conjure and we can have stokachu take a look
<alexlist> ok
<jcastro> https://github.com/kubernetes/kubernetes.github.io/pull/2556
<jcastro> a review here would be lovely!
<stokachu> alexlist, yea lemme know as it's supposed to copy to ~/bin automatically in the steps view
<jcastro> oh ok so conjure already copies to ~/bin?
<jcastro> ok so really the manual steps were the ones that needed to be fixed then
<jcastro> then when rye has the readmes use the upstream markdown instead of the bundle markdown it should all generate from one source of truth instead of the two we have now
<ahasenack> marcoceppi: hi, around?
<ahasenack> does anybody know if there is a python3(?)-libcharmstore package for trusty somewhere? It used to be in ppa:juju/devel, but the latest build there failed
<marcoceppi> ahasenack: I'm fixing that today
<ahasenack> the trusty build?
<ahasenack> I actually don't know what is pulling libcharmstore in
<marcoceppi> charm-tools
<ahasenack> where will it be uploaded to?
<marcoceppi> juju/stable
<SimonKLB> hey marcoceppi! could you tell me what the current best practice is for exposing charms deployed in lxd containers on aws? when im running locally with nested lxd containers i create some simple NAT rules in iptables, but when im running in a public cloud i also need to get the proper security rules in place and those don't seem to be created when executing `juju expose X` on the containerized application
<SimonKLB> (sorry, that was a long question) :D
<rick_h> SimonKLB: since you can't get at the containers from the outside you need something to help proxy things.
<rick_h> SimonKLB: usually i setup a HAProxy on the root of the host
<marcoceppi> rick_h: does the network setting changes in 2.1 help address this?
<rick_h> marcoceppi: not atm. You still can't get multiple addresses/mac addresses so the containers aren't internet addressable
<SimonKLB> rick_h: right! it would be useful to have some mechanism to expose containerized applications though, for example in the openstack bundle where lots of components are put in lxd containers instead of directly on the host
<rick_h> SimonKLB: yes, agreed. So the team is working to enable that when there's something that allows it. The network changes in 2.1 are a move in that direction and I know that by 2.2 the idea is to have that on places like manual provider, openstack, places where you might be able to get dhcp to the containers for root level ip addresses
<rick_h> SimonKLB: but AWS does some things in their SDN that only allows the one mac address on hosts so it's harder to get containers exposed
<SimonKLB> rick_h: even without dhcp access, NAT:ing could be an option, you just need to first open the ports in the security rules
<rick_h> SimonKLB: right, but because it has to be one mac the host can't NAT multiple things inside and tell who it goes to is my understanding. It could only do something based on one container per port perhaps.
<rick_h> SimonKLB: but yea, atm you have to handle that via a proxy and config and such but the team's actively working on it across many of the provders
<SimonKLB> rick_h: yea, that is what im doing right now, adding NAT rules with destination port as the match, so if you expose an application that is running a service on port 80 you add the iptables NAT rule and then expose the application like you would if it was running on the host
<SimonKLB> for example exposing keystone in the openstack-base bundle: iptables -t nat -A PREROUTING -p tcp --dport 5000 -j DNAT --to [private ip]:5000
<SimonKLB> this works great when you're running on localhost
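[Editor's note: on a public cloud the NAT rule only helps if the cloud firewall admits the traffic; a hedged AWS sketch with placeholder group id and CIDR.]

    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 5000 --cidr 203.0.113.0/24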
<alexlist> stokachu: it did indeed copy the files to ~/.kube/config and ~/bin/kubectl, but the last steps in conjure-up still threw errors
<stokachu> alexlist, what error?
<alexlist> stokachu: http://pastebin.ubuntu.com/24007303/
<Zic> I had the same error today alexlist / stokachu
<Zic> conjure-up tries to run kubectl get nodes / get pods before the cluster is ready during the deployment
<alexlist> stokachu: most likely a race condition - now it tried to copy the files even though the deploy isn't finished yet
<alexlist> what Zic said...
<Zic> (I just ignored the error, and the installation finished correctly btw)
<stokachu> what rev are you guys on?
<Zic> let me check
<stokachu> snap list conjure-up
<Zic> I don't use the snap
<Zic> it's the PPA version
<stokachu> Zic, ah, should migrate to the snap when you can
<Zic> 2.1.0-0~201701041302~ubuntu
<Zic> stokachu: noted :)
<alexlist> 2.1.0-0~201701041302~ubuntu16.10.1
<Zic> I'm on Ubuntu 16.04
<stokachu> yea you guys should be using the snap version
<Zic> ok
<stokachu> my test runners haven't seen this error yet; if you can, do a `sudo snap install conjure-up --classic --candidate`; `sudo apt-get remove conjure-up juju-2.0`
<alexlist> stokachu: I just followed https://jujucharms.com/canonical-kubernetes/ which tells me to use the PPA...
<stokachu> conjure-up provides everything you need
<stokachu> alexlist, yea ive got a PR to get that changed
<stokachu> alexlist, hasn't landed yet
<alexlist> \o/
<stokachu> with snaps you can deploy on trusty now too
<stokachu> latest juju etc
<Zic> stokachu: I don't have any preference between deb or snap, but https://jujucharms.com/canonical-kubernetes/ should be updated with the snap package installation instructions I guess
<Zic> it's the only reason of why I used the PPA :)
<stokachu> Zic, yep soon as they land the PR and bump charm revs
<Zic> cool
<BlackDex> Is there someone here who can help me with some problems with the nrpe charm? It doesn't install/announce the disk/mem/cpu checks? stub, blahdeblah, hloeung, pjdc_ or someone else?
<magicaltrout> there's a bug and a fix open for that I think BlackDex
<stokachu> Zic, alexlist once Juju 2.1 GA ive got deb updates that will point you to the snap install version
<alexlist> stokachu: ok.
<magicaltrout> cause i used it a bunch of times
<BlackDex> magicaltrout: if you mean bug 1605733, i couldn't find that file anymore in the new charms
<mup> Bug #1605733: Nagios charm does not add default host checks to nagios <canonical-bootstack> <family> <nagios> <nrpe> <unknown> <nagios (Juju Charms Collection):New> <https://launchpad.net/bugs/1605733>
<alexlist> stokachu: Do you think this will work on a plain Debian as well, or are there too many things missing? Just asking, as I have to deal with a managed hosting provider who prefer Debian ...
<stokachu> alexlist, i think there are efforts to get snapd running on debian
<magicaltrout> yeah BlackDex well that was my issue 3 weeks ago
<stokachu> i think it already does on the latest
<magicaltrout> i've not checked it since
<magicaltrout> but it was around like that for ages
<stokachu> alexlist, you should join #snappy and see if one of those guys know more
<stokachu> alexlist, but yea, the theory is wherever snappy can run you'll be able to use conjure-up
<stokachu> fedora, arch etc
<BlackDex> i will check it again by using the charm-tools to download the charm
<magicaltrout> i just juju ssh into the unit and hack around the code, but whatever floats your boat :)
<BlackDex> that is also an option, but not if you want to deploy it to a lot of instances
<magicaltrout> http://bazaar.launchpad.net/~charmers/charms/precise/nagios/trunk/view/head:/hooks/common.py
<magicaltrout> i don't think that has been updated BlackDex
<magicaltrout> so I think that bug is still valid
<BlackDex> man, i think i'm looking in the wrong charm now :p
<BlackDex> i need to check the nagios charm, and not the nrpe
<magicaltrout> yup
<BlackDex> doh
<magicaltrout> thats why I just hacked the code :)
<BlackDex> lets see if that is the case for the xenial version also
<BlackDex> that makes a bit more sense
<BlackDex> oke
<BlackDex> lets see what that does :)
<alexlist> stokachu: ok, with the snap versions everything works as it should
<stokachu> alexlist, great! thanks for testing
<BlackDex> magicaltrout: Thx for clearing that up, i now installed the latest nrpe charm with the manually patched nagios, that seems to work
<magicaltrout> no problem BlackDex
<magicaltrout> in other news.... someone just brought a parrot into our kitchen at work....
<BlackDex> parrot wants a cookie
<lazyPower> Zic  :D
<Zic> lazyPower: the production cluster is all fine, I have just one little trouble: the kube-dns pod restarts sometimes with no reason other than: 21h    3m    24    {kubelet mth-k8svitess-02}    spec.containers{dnsmasq}    Warning    Unhealthy    Liveness probe failed: HTTP probe failed with statuscode: 503
<Zic> I mitigated this with scaling the kube-dns deployment to 5 replicas instead of 1
<Zic> (it keeps restarting sometimes, ~1 every two hours, but at least there are other kube-dns pods, not restarting at the same time, that can handle requests)
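[Editor's note: a minimal sketch of the kube-dns scaling Zic describes.]

    kubectl --namespace kube-system scale deployment kube-dns --replicas=5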
<magicaltrout> when in doubt.... replicate!
<lazyPower> Zic - fantastic, we were discussing whether we should scale that... i think there's a bug for this actually
<Zic> magicaltrout: already done :p
<lazyPower> Zic - https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/181
<Zic> thx
<lazyPower> not 1:1 the same, but if you could add your comments there it would add some weight
<lazyPower> and we can probably get that scheduled for post 1.6 release
<Zic> other than that, all is fine, I'm running on 1 master for now but I will switch to nominal-3 when your patch is officially released regarding my bug :)
<lazyPower> Zic - well i think we want to use the autoscaler addon tbh
<lazyPower> as its using metrics to drive the scale
<lazyPower> surely it by default wants HA
<magicaltrout> I'm pushing for CDK to be used on a Darpa project here lazyPower, dunno if it'll win, but i'm tryin'
<lazyPower> magicaltrout - <3 <3 <3
<lazyPower> magicaltrout - let me know if there's anything we can do to help support this effort
<magicaltrout> slap some sysadmins?
<Zic> I have just a landing page with a logo publicly available through this K8s cluster :D
<lazyPower> Zic - no fancy mobile game servers? ;)
<Zic> the real start is planned for the end of the month
<Zic> lazyPower: it's about video streaming with paid access :)
<Zic> (no, it's not p*rn !!)
<magicaltrout> i'll have to get SaMnCo 's GPU stuff in there, we've got like 100+ GPUs at launch
<magicaltrout> Zic: you're streaming porn with CDK?!!! ;_
<Zic> nope :p
<SaMnCo> magicaltrout: yooohoooo
 * magicaltrout tweets Zic 's revelation....
<SaMnCo> What GPU?
<lazyPower> Zic - shenanigans, sounds like p*rn to me
<magicaltrout> i think most are titan X's SaMnCo but there are a range of different ones being brought in from other projects
<magicaltrout> plus we got told off by Nvidia for using off the shelf hardware instead of the stupidly marked up "data processing GPUs"
<magicaltrout> "how dare you use our chips for anything other than Games without us applying a 100% markup!"
<lazyPower> O_o
<lazyPower> obvious marketing effort is obvious
<magicaltrout> yup
<Zic> lazyPower: when I wrote "it's about video streaming with paid access" I immediately realized that it will be acknowledged as p*rn :p
<Zic> so I clarified it :D
<lazyPower> Zic - or netflix, people like to conflate the two
<magicaltrout> i like the fact no one can write porn
<Zic> magicaltrout: don't know if it's a banned word :p
<magicaltrout> well lazyPower has yet to tell me off
<Zic> seems there are no bots in this chan
<lazyPower> i think its a fine word in the current context
<magicaltrout> so i assume its acceptable, we are mostly adults after all :)
<jrwren> I'm going to imagine it is sports.
<magicaltrout> hehe
<lazyPower> if you were like explicit i might have to remind the both of us of the Code of Conduct
<Zic> :D
<lazyPower> however, we're all being sensible
<magicaltrout> http://www.darpa.mil/program/data-driven-discovery-of-models this is what we're working on lazyPower
<magicaltrout> 4 year programme about discovering data
<lazyPower> magicaltrout - wow this is a complex etl stack, its not really fully decomposed
<lazyPower> this is the 10k foot diagram right?
<Zic> plus, I talked about lazyPower's body some days ago, so I'm limiting myself on prohibited words :D
 * Zic left
<lazyPower> oh my
<lazyPower> you had to bring that back up didn't you
<lazyPower> hawkwarddddd
<Zic> :p
<magicaltrout> http://www.darpa.mil/news-events/2016-06-17
<lazyPower> yeahhh ml for ml
<lazyPower> your recursion is neat :D
<magicaltrout> yeah lazyPower lots of crazy GPU powered machine learning to run over public datasets to try and automatically detect the models without users having to write the code
<lazyPower> and phase 1 of skynet will have been delivered
<lazyPower> the fact CDK might possibly be empowering skynet, is kinda neat
<magicaltrout> Zic: https://irclogs.ubuntu.com/2017/02/16/%23juju.html you mean this publicly accessible log ? ;)
<Zic> lazyPower: I kept the meme image you gave me as a goodie :p
<magicaltrout> Zic 's love of lazyPower is forever preserved
<Zic> xD
<lazyPower> <3
<Zic> it's why I gave the Juju OpenStack project to one of my colleagues, I only want to use Juju if there are "lazyPower parts"
<magicaltrout> i wouldn't recommend that as a life choice ;)
<lazyPower> ^
<Zic> fun fact: Canonical's commercial support answered us with "About your OpenStack Kubernetes stack"
<Zic> I only mentioned Kubernetes in my contact mail...
<Zic> :>
<Zic> (we're going to buy commercial support eventually, when it really goes prod)
<mbruzek> awesome Zic
<lazyPower> Zic - i appreciate your contributions to my pizza budget
<Zic> :D
<magicaltrout> that is the largest budget in Canonical
<magicaltrout> more than Mark's private jet costs...
<Zic> the next Ubucon EU is in Paris
<Zic> iirc
<Zic> if you want your pizzas :p
<magicaltrout> sod ubucon
<lazyPower> i'm a firm believer in paying it forward
<magicaltrout> get lazyPower to sponsor you to juju charmer summit
<Zic> http://ubucon.org/en/
<lazyPower> Zic - if you can enrich someone else's life with a pizza, pay it forward to them and i'll be happy
<Zic> :D
<Zic> lazyPower came to the EU one time, he's scared now
<Zic> (because of me)
<magicaltrout> na he got taken away by a weird british guy and his belgian friend.... he's not been the same since
<lazyPower> ^ true story
<lazyPower> WWI re-enactment actors
<lazyPower> i got a heck of a history lesson that night magicaltrout
<magicaltrout> that's what you tell us lazyPower
<magicaltrout> no one else was there to witness it
<lazyPower> a gentleman never tells
<Zic> xD
<lazyPower> but what happened does *not* rhyme with zic's business model
<magicaltrout> hehe
<SaMnCo> @Zic who is your sales rep?
<SaMnCo> @magicaltrout : so it is bare metal stuff?
<magicaltrout> SaMnCo: we've got some baremetal stuff, some openstack stuff
<magicaltrout> depends where stuff gets deployed
<Zic> SaMnCo: we don't have any contract for now, we're just preparing it, but our contact is Mac Belonwu
<Zic> (I haven't answered him yet, as I need to have some COMEX discussions at our company first :/)
<Zic> SaMnCo: or maybe I misunderstood your question: nope, I'm just a sysadmin at our company :)
<SaMnCo> OK. I usually cover EU for pre-sales stuff around k8s, so any question or issue don't hesitate to involve me
<magicaltrout> SaMnCo is trying to steal commission ! ;)
<SaMnCo> ahahah :D
<SaMnCo> you know sales / pre-sales, it's like Jehovah's Witnesses
<SaMnCo> they always go in twos
<Zic> we're Paris-based, so maybe you're more involved in my request than your colleague
<magicaltrout> that then try and extort money from you with tales of woe! you're correct, it's identical
<SaMnCo> Zic: no no, I just wanted to understand where you were in the process to see if there was a need for tech support on your end. I contacted Mac, so we're all set.
<Zic> SaMnCo: we just sent a request via ubuntu.com for now, we didn't really answer him (he just had our sales rep on the phone) as we don't have all the elements
<SaMnCo> ok
<Cynerva> Hey folks, if I git clone juju and run `snapcraft`, does it build from the local repo?
<Cynerva> I want to try something that's in the 2.1 branch but not in a release candidate yet
<Cynerva> I'm not familiar with golang or the juju repo, so I just want to make sure the resulting snap has whatever's in the tree :)
<neiljerram> Is there a way that I can have all units in a bundle deployed to machines in the same GCE zone?
<marcoceppi> neiljerram: you can, but it's typically not advised
<neiljerram> marcoceppi, I'm wondering if it might help with a problem I'm seeing with 'juju ssh'
<lazyPower> neiljerram - i can confirm i've been experiencing networking issues in google/us-central1 region
<lazyPower> and it started this morning, it was fine last night.
<neiljerram> lazyPower, yes, that's where I've been seeing issues too.
<lazyPower> neiljerram - i moved to us-east1, its a bit slower, but it doesn't have the same connectivity issues
<neiljerram> lazyPower, But I believe my issues are much more longstanding than just the last day or two.
<lazyPower> and more to the point of being obnoxious, it's intermittent
<neiljerram> What is the symptom that you see?
<lazyPower> neiljerram - i have issues connecting between units in different az's and i have some external connectivity failures, specifically with their apt mirror.
<lazyPower> i had an etcd cluster tank during testing because AZ-a wasn't able to talk to AZ-c for whatever reason
<neiljerram> Interesting.  The thing I notice first, in my case, is 'juju ssh ...' failing.  But it could also be that there are connectivity issues between the deployed units.
<lazyPower> that would be consistent actually if your controller is in a different AZ than your unit
<lazyPower> i do believe that default behavior is to proxy through the controller to establish ssh, but that is configurable
<neiljerram> Ah, that sounds good, where is the switch for that?
<lazyPower> proxy-ssh                   default  false
<lazyPower> i see its defaulted to false here, so looks like i may be wrong
<lazyPower> neiljerram - fyi - juju model-config  or juju model-defaults
<neiljerram> I have false as well, already - so guess that's good.
<neiljerram> So for getting all unit machines in the same zone, I think I just discovered the method for that:
<neiljerram> for n in `seq 1 10`; do juju add-machine zone=us-central1-c; done
<neiljerram> But I still have my controller in a different zone (us-central1-a)...
<neiljerram> I guess that setting the zone for the controller would need to be something on the 'juju bootstrap' invocation.  Any ideas?
<lazyPower> neiljerram - there are bootstrap-constraints
<lazyPower> juju bootstrap --help has an overview
<neiljerram> I already had that help in my terminal, but hadn't seen --bootstrap-constraints.
<neiljerram> So would it be: juju bootstrap google/us-central1 bundle --config image-stream=daily --bootstrap-constraints zone=us-central1-c
<neiljerram> (Just waiting for my existing controller to die so I can try myself.)
<neiljerram> No, ERROR unknown constraint "zone"
<lazyPower> hmmm
<lazyPower> i would have thought that would have mirrored the constraints you can pass to --constraints
<lazyPower> i admittedly have not attempted to pass a zone constraint on those constraints.
<neiljerram> Ah, it's --to instead of --bootstrap-constraints.
<neiljerram> (Discovered from code reading!)
<neiljerram> So: juju bootstrap google/us-central1 bundle2 --config image-stream=daily --to zone=us-central1-c
<lazyPower> oh neat
<lazyPower> #TIL
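For reference, neiljerram's add-machine loop can also be driven from python-libjuju. A minimal sketch, assuming a current libjuju and assuming Model.add_machine() accepts the same zone placement directive the CLI does (the zone value is the one from the discussion above):

    import asyncio
    from juju.model import Model

    async def add_zone_machines(count, zone='us-central1-c'):
        model = Model()
        await model.connect()  # connect to the currently active model
        try:
            for _ in range(count):
                # same directive as `juju add-machine zone=...`; that libjuju
                # parses it identically is an assumption of this sketch
                await model.add_machine(spec='zone=' + zone)
        finally:
            await model.disconnect()

    asyncio.get_event_loop().run_until_complete(add_zone_machines(10))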
<freyes> hi marcoceppi , I noticed that ceph-proxy doesn't exist under https://bugs.launchpad.net/charms/ , so it's not possible to file bugs against it ( https://jujucharms.com/ceph-proxy/xenial/0 ), could you add it? or do you know who could do it?
<stormmore> finally gotten around to installing an IRC client
<stormmore> o/ juju world!
<marcoceppi> o/ stormmore welcome back :)
<stormmore> hate doing workstation reinstall but sometimes ya gotta do what ya do!
<lazyPower> stormmore - :) :) wb
<Budgie^Smore> hmm this is weird
<stormmore> OK this is annoying, I didn't use to get ping timeouts :-/
#juju 2017-02-17
<Budgie^Smore> OK well that explains why I could use this nick earlier
<bdx> =1
<kklimonda> how do I change location from which Juju downloads lxc images?
<Zic> lazyPower: this RESTARTS column stresses me a bit, but since I scaled them 1->5, at least they do not restart at the same time, so no interruption normally... do you have this kind of behavior in your labs? it's the same "spec.containers{dnsmasq}    Warning    Unhealthy    Liveness probe failed: HTTP probe failed with statuscode: 503" error every time
<Zic> http://paste.ubuntu.com/24012539/
<Zic> (oh, I'm not polite today : Hello!)
<SimonKLB> I'm unable to view the charm page of my unpublished charm in the new charmstore gui
<SimonKLB> i can see the charm in my list of charms, but when i click on the "View" button it says "There was a problem while loading the entity details. You could try searching for another charm or bundle or go back."
<rick_h> tvansteenburgh: ping
<tvansteenburgh> rick_h: yo
<rick_h> tvansteenburgh: hey, can I steal a few minutes to talk libjuju usage stuff sometime?
<tvansteenburgh> rick_h: yeah sure
<lazyPower> Zic - no, my dns container has 4 restarts in 61 days
<lazyPower> Zic - i'll shop it around though, see what IS is seeing in their long running deployment
<rick_h> tvansteenburgh1: is this a not yet done method then? https://github.com/juju/python-libjuju/blob/master/juju/controller.py#L209
<tvansteenburgh1> rick_h: correct
<tvansteenburgh> rick_h: if i know someone needs it i'll prioritize it, so you can either file a bug for that, or take a shot at implementing it and post a PR
<rick_h> tvansteenburgh: yea, just poking at the facade stuff and will see what I can do ty
<Zic> lazyPower: we have 124 pods running (if it's tied to the number of pods)
<lazyPower> Zic - well i only have 36 running, and only 1 dns pod
<lazyPower> Zic - so your deployment volume is much higher. have you been watching the resource utilization of the pod in the dashboard?
<lazyPower> is the DNS service using quite a bit of ram? any conntrack table issues in the logs?
<lazyPower> Zic - i ask because rimasz found some issues with a competing solution's dns deployment, it filled up the conntrack table and started dropping packets like crazy before the pod itself was terminated due to failing health checks.
<lazyPower> that *might* be a similar situation but i doubt it, i would think you'd have found other issues as a symptom if that were the case
<Zic> lazyPower: I got some "dnsmasq[1]: Maximum number of concurrent DNS queries reached (max: 150)" logs that I managed to collect directly via `docker logs` (kubectl logs on the kube-dns pod returned nothing)
<Zic> it's maybe the problem
<lorenzotomasini> Hi *, i am trying to implement a python juju client, but when using model.deploy() i get an error:
<lorenzotomasini> Exception in thread Thread-6:
<lorenzotomasini> Traceback (most recent call last):
<lorenzotomasini>   File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
<lorenzotomasini>     self.run()
<lorenzotomasini>   File "/usr/lib/python3.5/threading.py", line 862, in run
<lorenzotomasini> in the doc i found
<lorenzotomasini>  :param str to: Placement directive, e.g.::
<lorenzotomasini>             '23' - machine 23
<lorenzotomasini>             'lxc:7' - new lxc container on machine 7
<lorenzotomasini>             '24/lxc/3' - lxc container 3 on machine 24
<lorenzotomasini>             If None, a new machine is provisioned.
<lorenzotomasini> but actually passing the simple machine number gives me the above error
<lorenzotomasini> can somebody give me a hint?
<lorenzotomasini> thanks in advance
<magicaltrout> tvansteenburgh: -^
<tvansteenburgh> lorenzotomasini: can i see your code and the full traceback?
<lorenzotomasini> sure
<lorenzotomasini> so my code
<lorenzotomasini> async def deploy_local_charm(charm_dir_path, application_name, number_of_units, machine_number=None, model=None):
<lorenzotomasini>     application = await model.deploy(charm_dir_path,
<lorenzotomasini>                                      application_name,
<lorenzotomasini>                                      num_units=number_of_units,
<lorenzotomasini>                                      to=str(machine_number))
<lorenzotomasini>     log.debug("deployed application: %s", application)
<lorenzotomasini> and the full stack is:
<lorenzotomasini> Traceback (most recent call last):
<lorenzotomasini>   File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
<lorenzotomasini>     self.run()
<lorenzotomasini>   File "/usr/lib/python3.5/threading.py", line 862, in run
<lorenzotomasini>     self._target(*self._args, **self._kwargs)
<tvansteenburgh> lorenzotomasini: please put it in a pastebin
<magicaltrout> ho ho ho
<lorenzotomasini> https://p.rrbone.net/paste/F3bTxZTB#Z1bY+UVIQ-t10wSxo5fHIpgaxppO58da3fgBGXUt2w/
<tvansteenburgh> lorenzotomasini: okay, and what is the value of machine_number?
<lorenzotomasini> 112, int
<lorenzotomasini> what should it be?
<lorenzotomasini> actually it is a str
<petevg> lorenzotomasini, tvansteenburgh: the docstring is dated, and it's my fault. Sorry :-/  You need to pass a list to "to".
<petevg> So ['112'] should work.
<lorenzotomasini> petevg: i try this thanks
<tvansteenburgh> i think it also needs to be parsed
<petevg> tvansteenburgh: you're right.
<lorenzotomasini> tvansteenburgh: sry my python is not really good, how should i parse it?
<tvansteenburgh> to=placement.parse(machine_number)
<tvansteenburgh> from juju import placement
<petevg> lorenzotomasini: your Python isn't at fault. It's a tricky bit of code. (Thank you, tvansteenburgh)
<tvansteenburgh> something like that
<lorenzotomasini> ah but machine_number still has to be a list of strings or just a str?
<tvansteenburgh> just string, placement.parse will convert it
<lorenzotomasini> ok thanks
<lorenzotomasini> i'll let you know in a few minutes
<tvansteenburgh> lorenzotomasini: example here https://github.com/juju/python-libjuju/blob/master/juju/application.py#L91
<petevg> tvansteenburgh: I'll submit an update to the docs shortly.
<tvansteenburgh> petevg: thanks!
<petevg> np
<lorenzotomasini> found this
<lorenzotomasini> to=placement.parse(str(machine_number))
<lorenzotomasini> from juju import placement
<lorenzotomasini> tvansteenburgh: I have another question... is there a command in the sdk for executing the "juju add-machine ssh:<ip>", i was not able to find it
<lorenzotomasini> tvansteenburgh: unfortunately the placement did not fix it https://p.rrbone.net/paste/So9PLneL#CsMGASLtg54WhuTbZkR39baUi-Is5bC4aSh8uPTnAyT
<tvansteenburgh> lorenzotomasini: re: add-machines https://github.com/juju/python-libjuju/issues/51
<tvansteenburgh> lorenzotomasini: need to see your code
<lorenzotomasini> tvansteenburgh: https://p.rrbone.net/paste/IVbSnOTe#d7jeohncqjo5SN437DmHLP9A5amB4fKe2VBcoi6udlS
<tvansteenburgh> ahhh
<tvansteenburgh> to=[dict(scope='#', directive=str(machine_number))]
<tvansteenburgh> lorenzotomasini: try that ^
<lorenzotomasini> tvansteenburgh: ok now i get a different error but that is my fault
<lorenzotomasini> tvansteenburgh: thanks
<lorenzotomasini> tvansteenburgh: but... when i fixed the last error, i got... this one
<lorenzotomasini> https://p.rrbone.net/paste/gccTi3Fr#O7+-2oKOX6V2b6L8nzMRsd+6Agbs1nfc1C5FziEimIM
<tvansteenburgh> hmm
<tvansteenburgh> lorenzotomasini: seems to be a bug
<tvansteenburgh> lorenzotomasini: i'll work on a fix for this
<lorenzotomasini> tvansteenburgh: ok thanks, will this be included in an official release? if so, any chances of including the #51 github issue fix?
<tvansteenburgh> lorenzotomasini: yes and yes
<lorenzotomasini> tvansteenburgh: and last and most critical question: when do you think will this be available?
<tvansteenburgh> lorenzotomasini: early next week probably
<lorenzotomasini> :)
<lorenzotomasini> ok thanks
<tvansteenburgh> if you want to hack the libjuju source i can tell you how to work around the placement bug in the meantime
<lorenzotomasini> yeah sure
<tvansteenburgh> lorenzotomasini: comment out these lines https://github.com/juju/python-libjuju/blob/master/juju/model.py#L932-L937
<lorenzotomasini> tvansteenburgh: I see a placement-fix branch... is it already fixed there?
<tvansteenburgh> lorenzotomasini: no that was just a docstring update
<lorenzotomasini> tvansteenburgh: ah ok, if i comment these two lines it will still deploy on the machine i chose right?
<tvansteenburgh> lorenzotomasini: yeah, it should
<lorenzotomasini> tvansteenburgh: what should i put then here? https://github.com/juju/python-libjuju/blob/master/juju/model.py#L1003
<lorenzotomasini> placement=to,
<tvansteenburgh> yeah
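Pulling the thread together, the interim working version of lorenzotomasini's helper would look roughly like this; a sketch only, using tvansteenburgh's dict-style placement workaround (and with the model.py lines above commented out until the fix lands):

    async def deploy_local_charm(charm_dir_path, application_name,
                                 number_of_units, machine_number=None,
                                 model=None):
        # Workaround for the placement bug discussed above: hand-build the
        # placement dict instead of passing a bare machine-number string.
        # scope='#' means "an existing machine id".
        application = await model.deploy(
            charm_dir_path,
            application_name,
            num_units=number_of_units,
            to=[dict(scope='#', directive=str(machine_number))],
        )
        return application

Once the fixed release is out, to=placement.parse(str(machine_number)) (with from juju import placement) is the intended spelling.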
<erlon> guys, does anyone know where to configure the proxy settings for juju 2.0?
<erlon> it used to be like this in 1.2.2: 'juju set-env'
<kwmonroe> erlon: i believe it's 'juju model-config [http|https|no]-proxy=foo'
<erlon> kwmonroe: hmm, thanks ill try
<kwmonroe> erlon: i also think you can adjust model defaults so that subsequent add-model calls will use your proxy settings:  juju model-defaults *-proxy=foo
<kwmonroe> and whilst on the subject, you can see which proxy vars are supported with:  juju model-config | grep proxy
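The same settings can also be driven from python-libjuju; a rough sketch, assuming a current libjuju where Model.set_config() takes a plain dict (the squid URL is a placeholder):

    import asyncio
    from juju.model import Model

    async def set_proxy(proxy_url):
        model = Model()
        await model.connect()  # connect to the currently active model
        try:
            # same keys as `juju model-config http-proxy=... https-proxy=...`
            await model.set_config({'http-proxy': proxy_url,
                                    'https-proxy': proxy_url})
        finally:
            await model.disconnect()

    asyncio.get_event_loop().run_until_complete(
        set_proxy('http://squid.internal:3128'))  # placeholder proxy URL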
<cory_fu> kwmonroe: Hey, when you run java-devenv through cwr-ci, are you getting the "NoneType has no attribute series" error from charm proof still?
<kwmonroe> negative cory_fu: http://juju.does-it.net:5000/build_bundle_java_devenv/1/report.html
<cory_fu> kwmonroe: ok, thanks
<stormmore> are we having fun yet Juju world?
<kwmonroe> juju deploy lunch --to belly
<stormmore> seriously, this disconnect crap has got to stop, didn't have this issue on windows
<cory_fu> kwmonroe: I think you were right to push back on "sleep 10" as a work around for the "unremovable model" issue.  I can reproduce this on the CLI with `juju add-model foo; juju destroy-model -y foo`
<cory_fu> Looking for an existing bug now
<cory_fu> kwmonroe: https://bugs.launchpad.net/juju/+bug/1635052
<mup> Bug #1635052: Destroying model shortly after creation results in endless loop <ci> <destroy-model> <juju:Fix Committed> <juju 2.0:Won't Fix> <juju 2.1:Fix Committed> <https://launchpad.net/bugs/1635052>
<kwmonroe> cory_fu: thx for finding a faster repro.  i do loathe "sleep x" as a fix for anything, except insomnia.
<cory_fu> kwmonroe: Can you test it on the latest beta?
<kwmonroe> cory_fu: is rc2 out?
<cory_fu> Not that I'm aware of.  Did you still see this on beta5?
<cory_fu> Wait, is rc1 out already?  My snap didn't update
<kwmonroe> heh, rc2 is out
<kwmonroe> read your mail cory_fu!
<cory_fu> ha
<cory_fu> I guess I need to switch from --channel=beta to --channel=candidate
<kwmonroe> hey cory_fu, looks like this was fixed in rc1.  def fails in 2.0.3, but rc1+ looks ok:  http://paste.ubuntu.com/24016323/
<cory_fu> Good to know
<kwmonroe> yeah i guess good for you.. now i've got an undestroyable model hanging around.
<cory_fu> heh
<cory_fu> If it doesn't have any units, it's not really doing any harm.
<kwmonroe> it harms my ocd cory_fu
<cory_fu> :)
<ayan> does anyone here use the go 1.8rc3 package?  if so, how do you get /usr/bin/go to point to the right binary?  update-alternatives doesn't seem to know about it.
<stormmore> is it bad that I am using Ansible to bootstrap my Juju environment?!?!
<kwmonroe> stormmore: you could shake magnets over a hard disk to bootstrap your juju env and i wouldn't be mad.
<kwmonroe> (as long as you don't ask for help)
<bdx> mbruzek, lazyPower: how can I specify an ssl key/cert for the kube-api-endpoint? It seems I need the fqdn of my kube-api-endpoint to exist in the SANs of the key/cert used on the kube api endpoint (kubernetes-master). The privately signed key/cert from easyrsa gives me an "untrusted authority" error when trying to register workflows via kube-api-endpoint. I'm trying to specify a publicly signed
<bdx> wildcard cert there for my domain, but I think I need to add the kube SANs to it as well, via a chain or something. Hoping to try and flesh out what my options are here ....
<bdx> oh my ... I just found the work around I think, there is an '--ssl-verify' bool arg that can be fed to deis
<bdx> http://paste.ubuntu.com/24016692/
<stormmore> kwmonroe, really I am having Ansible bootstrap MaaS as well as Juju since Juju requires an existing controller
#juju 2017-02-18
<mbruzek> bdx: there is an extra SANs section in the cert that you can specify. you will have to look at the code, but you may be able to insert the fqdn in the extra sans (if it is not already there).
<mbruzek> bdx: to view the cert you can run a command to introspect the certificate or key.
<mbruzek> openssl x509 -text -in /usr/local/share/ca-certificates/easyrsa.crt
<mbruzek> bdx: The section you want to look for is X509v3 Subject Alternative Name
<mbruzek>  openssl x509 -text -in /var/lib/juju/agents/unit-easyrsa-0/charm/EasyRSA-3.0.1/pki/issued/kubernetes-master_0.crt
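If you'd rather script the check than eyeball the openssl output, a small sketch using the python cryptography package (assuming a recent release where the backend argument is optional; the cert path is the one mbruzek gives above):

    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    # Load the issued cert and print the DNS names in its SAN extension.
    path = ('/var/lib/juju/agents/unit-easyrsa-0/charm/'
            'EasyRSA-3.0.1/pki/issued/kubernetes-master_0.crt')
    with open(path, 'rb') as f:
        cert = x509.load_pem_x509_certificate(f.read())
    san = cert.extensions.get_extension_for_oid(
        ExtensionOID.SUBJECT_ALTERNATIVE_NAME)
    print(san.value.get_values_for_type(x509.DNSName))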
<bdx> mbruzek: so, I was able to get my fqdn into the certs by specifying it here https://jujucharms.com/u/containers/kubernetes-master/11#charm-config-dns_domain, then just setting the A record pointing to the master to kubernetes.mydomain.com
<mbruzek> oh yeah
<mbruzek> That
<bdx> the kubernetes charm will append the dns_domain config value to "kubernetes."
<mbruzek> Do that then!
<bdx> mbruzek: I did, but then I was still blocked by an "unauthorized authority" error .... then I found the '--ssl-verify' option
<bdx> which I set to false
<bdx> and I now seem to be able to reach the endpoint successfully, but hit another trap door :/
<bdx> https://imgur.com/a/6IMya
<bdx> mbruzek: thanks for your insight there
<catbus1> stokachu: Hi, conjure-up imports ubuntu .root.tar.gz images for novakvm, shouldn't it be ubuntu -disk1.img?
<catbus1> https://github.com/conjure-up/spells/blob/master/openstack-base/steps/share/glance.sh
<catbus1> https://jujucharms.com/openstack-base/
<catbus1> or is it the lxd images will work on kvm machines as well?
<ElikseN> BORING
#juju 2017-02-19
<narinder> from today onwards I started seeing this error from build: Unable to locate layer:openstack-api. Do you need to set LAYER_PATH?
<bdx> hey, charmstore is redirecting me to my user namespace when trying to access my team namespace
<bdx> https://bugs.launchpad.net/juju/+bug/1666065
<mup> Bug #1666065: charmstore redirecting to usernamespace  <juju:New> <https://launchpad.net/bugs/1666065>
<bdx> whoops accidentally filed for juju, heres the charmstore bug https://github.com/juju/charmstore/issues/714
#juju 2018-02-12
<gsimondo1> Got a 3 machine juju cloud that is currently stuck because it can't resolve a relation between influxdb and telegraf. This puts influxdb in error state. Resolving its state just puts it back in error state. The consequence is that I can't remove model, controller, machine, or anything that's above this unit/relation in stack.
<gsimondo1> Any advice on how to handle these situations?
<kjackal> hi gsimondo1, first file a bug with the proper section of juju debug-log showing the error. Then you can remove the machine where the failing application is deployed and that should unblock all the remove operations
<kjackal> let me check if there is a --force flag
<kjackal> yes, juju remove-machine <id> --force
<gsimondo1> kjackal: Just saw your message. I figured that out a couple of hours ago but that kills the machine. So I take it that this is a bug that's worth reporting on github?
<kjackal> gsimondo1: if you want to keep the machine you can try juju resolved with --no-retry flag
<gsimondo1> kjackal: doesn't help. it wakes up and reexecutes the failing relation. can't get rid of the relation.
<gsimondo1> 2018-02-12 13:25:55 DEBUG query-relation-joined TypeError: configure() missing 2 required positional arguments: 'username' and 'password'
<gsimondo1> 2018-02-12 13:25:55 ERROR juju.worker.uniter.operation runhook.go:113 hook "query-relation-joined" failed: exit status 1
<kjackal> gsimondo1: this is on the telegraf or the influx db side?
<kjackal> influxdb, sorry
<gsimondo1> root@kubernetes-2 ~ # tail -f /var/log/juju/unit-influxdb-0.log
<kjackal> gsimondo1: how did you deploy this charm?
<kjackal> I am looking for the revision
<kjackal> is it thisone: https://jujucharms.com/influxdb/13
<gsimondo1> kjackal: ehalilov@kubernetes-0 ~ $ juju deploy cs:~influxdb-charmers/influxdb
<gsimondo1> before that juju deploy telegraf
<gsimondo1> kjackal: the problem is of course that I was experimenting with relations that are not offered when you do 'juju add-relation telegraf influxdb'
<gsimondo1> kjackal: with something like 'juju add-relation telegraf:juju-info influxdb:juju-info'
<kjackal> gsimondo1: the error you got says that configure() missing 2 required positional arguments: 'username' and 'password', this method is here: https://github.com/ChrisMacNaughton/interface-influxdb-api/blob/master/provides.py#L27  and the charm is calling this method here: https://git.launchpad.net/influxdb-charm/tree/reactive/influxdb.py#n261
<kjackal> so... this is a bug on the influxdb charm if I understand this correctly
<kjackal> gsimondo1: will you be able to open a ticket here: https://launchpad.net/influxdb-charm ?
<gsimondo1> kjackal: OK, I'll test a couple of other things and then open a ticket. I also have some other things failing in a similar fashion (relation related)
<kjackal> now, to get you unblocked.... how confident are you in your scripting skills? If you juju ssh into influxdb you can find the charm source under /var/lib/juju/unit-influxdb-0(probably)/charm
<gsimondo1> kjackal: correct me if I'm wrong but the main issue here is that I can't get rid of the failing relation... and it affects things like the model, controller, machine, etc. Basically it puts everything in a state of paralysis, and that's a bug that should be fixed at the level of juju, or am I missing something?
<gsimondo1> kjackal: OK let me take a look at that file. I'm a software engineer turned devops so I can code.
<kjackal> awesome
<kjackal> so if you go to reactive/influxdb.py and do something like an early "return" you will not be hitting this roadblock
<kjackal> gsimondo1: your issue is already reported: https://bugs.launchpad.net/influxdb-charm/+bug/1723334
<mup> Bug #1723334: influxdb-api relation breaks when related to telegraf charm <InfluxDB Charm:New> <https://launchpad.net/bugs/1723334>
<gsimondo1> mup: Yep
<mup> gsimondo1: I apologize, but I'm pretty strict about only responding to known commands.
<kjackal> mup: help
<mup> kjackal: Run "help <cmdname>" for details on: bug, contrib, echo, help, infer, issue, login, poke, register, run, sendraw, sms
<zeestrat> mup: help help
<mup> zeestrat: help [<cmdname>] — Displays available commands or details for a specific command.
<kjackal> mup: help sms
<mup> kjackal: sms <nick> <message ...> — Sends an SMS message.
<mup> kjackal: The configured LDAP directory is queried for a person with the provided IRC nick ("mozillaNickname") and a phone ("mobile") in international format (+NN...). The message sender must also be registered in the LDAP directory with the IRC nick in use.
<zeestrat> What a helpful fellow
<gsimondo1> kjackal: commenting out the problematic line works, but it's for sure not a solution to the problem
<kjackal> yes, this has to be fixed upstream in the charm code
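For the record, the temporary unblock amounts to something like this inside the deployed charm's reactive/influxdb.py; the handler name and signature below are placeholders, not the charm's actual ones:

    # Placeholder sketch: whichever reactive handler ends up calling the
    # broken interface method just bails out early instead.
    def handle_query_relation(query):
        # TEMPORARY: skip the configure() call that raises
        # "TypeError: configure() missing 2 required positional arguments"
        return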
<gsimondo1> kjackal: I don't know people who use canonical k8s yet but it seems to me that it would be rather painful without custom charms
<kjackal> gsimondo1: are you deploying k8s?
<kjackal> I am with the team working on packaging k8s, can you elaborate a bit on the issues you faced?
<kjackal> gsimondo1: we have a set of add-on charms that do the monitoring and logging but they are not based on influxdb, we use prometheus
<gsimondo1> kjackal: creating a proof of concept but firstly trying to break juju as much as possible. the out of the box k8s "production" bundle provided installs fine but we would like to use components that do not have charms already or don't have them for xenial
<gsimondo1> kjackal: I want prometheus but I want to have influxdb for durable storage
<gsimondo1> kjackal: currently I use remote read/write that they offer for our current solution outside of k8s
<kjackal> would you be able to make a case for influxdb addition here: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues
<kjackal> gsimondo1: this is the place where we gather all requests and bugs on canonical k8s
<gsimondo1> kjackal: the only problem is that remote read/write for prometheus and influxdb integration seems to be something that they want to solve by providing an ability to add influxdb as default storage for prometheus
<gsimondo1> kjackal: here's paul dix on that https://www.youtube.com/watch?v=BZkHlhautGk
<gsimondo1> kjackal: so building something for this remote read/write solution that may be superseded by a superior solution that's part of the prometheus code seems like a waste of time
<gsimondo1> kjackal: but I can create an issue, if nothing else then for discussion about this
<kjackal> that would be nice, thank you
<kjackal> gsimondo1: We keep kubernetes bundle addons here: https://github.com/conjure-up/spells/tree/master/canonical-kubernetes/addons
<kjackal> i guess graylog and prometheus are the most interesting
<gsimondo1> kjackal: so prometheus one is pretty much what I'm trying to test currently, just with influxdb in the mix for storage.
<gsimondo1> kjackal: outside of the bundle you provide of course
<gsimondo1> kjackal: the most dangerous thing I've encountered so far is the bug in these upstream charms that led to an impasse in my testing environment where I can't do anything due to some relation hook failing. my conclusion from that is that one has to test all the edge cases of charms used if they don't want to experience such paralyzing events in production systems
<gsimondo1> kjackal: again, not sure if juju should provide some kind of mechanism to exit the deadlock as you seem to suggest that charms should just work, if they don't then hot fix the code
<kwmonroe> gsimondo1: one thing you might be hitting is that events can queue up on the failing unit, so you'd have to run --no-retry multiple times.  for example, influx may fail on the query relation, but then have a peer or status event queue up behind that.   if the root cause of the error affects all those events, you'd have to run "juju resolved --no-retry influxdb/0" multiple times to get the unit back to a responsive state.
<kwmonroe> you could watch status with "juju debug-log -i influxdb/0 --tail" in one window while you're doing the --no-retry.  you should see juju progress past one hook, then potentially move to another that you have to pass with --no-retry, etc etc.
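A rough sketch of the drain loop kwmonroe describes, assuming the juju 2.x JSON status layout (applications -> units -> workload-status) and the unit name from this thread:

    import json
    import subprocess
    import time

    def drain_failed_hooks(unit):
        """Keep marking a failed unit resolved, without re-running the
        failed hooks, until juju stops reporting it in error."""
        app = unit.split('/')[0]
        while True:
            status = json.loads(subprocess.check_output(
                ['juju', 'status', '--format=json']))
            workload = status['applications'][app]['units'][unit]['workload-status']
            if workload['current'] != 'error':
                break
            subprocess.call(['juju', 'resolved', '--no-retry', unit])
            time.sleep(5)

    drain_failed_hooks('influxdb/0')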
<gsimondo1> kwmonroe: useful, thanks for that. I'll test if that resolves the issue. So far I've been mitigating this by manually editing code to get out of the deadlock
<kwmonroe> gsimondo1: fyi, this feels like bug 1741506.  if we had a --force flag for juju remove-*, it could do the --no-retries automatically until the removal could be completed.
<mup> Bug #1741506: Removing a unit or application doesn't resolve errors <cli> <usability> <juju:Triaged> <https://launchpad.net/bugs/1741506>
<gsimondo1> kwmonroe: seems like it. also, from the point of view of UI, what you suggest is important because prompting people should be optional IMO
<kwmonroe> +1
<rick_h> bdx: howdy
<bobeo> rick_h: bdx So I deployed two instances of owncloud into juju, and I also deployed two instances of postgresql, and the databases have synced as expected, and are replicating as expected; however, owncloud did not, even though owncloud uses postgresql for user accounts and for data storage. Can anyone explain, or help me understand something that I missed?
<kwmonroe> bobeo: a quick look at the charm source (https://github.com/omnivector-solutions/layer-owncloud/blob/master/metadata.yaml) doesn't show any peer relations.  that means multiple owncloud instances won't do anything special when another comes along.  if you've deployed 2 of those, i'd bet that they both have their own db created and are working as 2 independent ownclouds.
<bobeo> kwmonroe: But it does seem that the postgresql that I deployed does have it. postgresql/0*  active    idle   4/lxd/6  10.0.0.42       5432/tcp  Live master (9.5.11) postgresql/1   active    idle   5/lxd/2  10.0.0.80       5432/tcp  Live secondary (9.5.11)
<bobeo> kwmonroe: The thing that confuses me is, shouldn't the postgresql database be storing all of the configuration data, IE the databa... OOOOO! if the webserver doesn't point to the data, it doesn't matter that it's replicated!
<bobeo> kwmonroe: so what would you recommend I do to "make it" do the thing? Its so close to perfect for me. What are my options?
<kwmonroe> bobeo: what thing are you trying to do?  make it so that 2 owncloud units use the same database?
<bobeo> kwmonroe: as for owncloud to the db, I added the relation of postgresql:db owncloud:postgresql
<kwmonroe> right -- postgres does have a peer relation, so it will replicate amongst the cluster of postgres units.
<bobeo> kwmonroe: Yes, and that the data replicates in the database between the database clusters, specifically so that 2 servers have the data in two databases, so that if one physical server dies, services remain available.
<bobeo> kwmonroe: The idea is point haproxy forwarders to the local haproxy services, which point at the owncloud servers that share the same postgresql clusters
<bobeo> high availability from top to bottom. I lost a django project today. Never again.
<bobeo> I lost almost a month worth of code progress, damn near cried, and quite literally might have bitten off something's head had anything live been within range at the moment of realization
<kwmonroe> ouch :(
<bobeo> kwmonroe: yea, especially because I'm new to django. The idea is to build cool apps that solve real problems using django, decorate it using bootstrap and reactive. I was so close. less than 30 minutes away, and I would have finished my first successful django project.
<bobeo> kwmonroe: so I'm hoping this idea, combined with a gitlab to go with it, will prevent that from ever happening again. Plus it'll allow me to share the code with others for peer review, as well as external input/assistance.
<kwmonroe> bobeo: since the postgres peering work is already done, the next piece would be to make owncloud aware of its peers.  i dunno what that entails, so you'd have to check the OC docs for whatever is needed to run a cluster of owncloud apps.  it may be as simple as just ensuring they all use the same db creds.
<kwmonroe> bobeo: i think the harder part will be for the object storage.  i'm assuming OC uses some actual filesystem to store objects and that they're not just blobs in a database.
<kwmonroe> so for that, you'd need to have a common NFS share, or s3 storage, or something so all the OC units could access the same backend data.
<bobeo> kwmonroe: I was under the impression, since it didn't include using NFS, that it was storing as BLOBs in the postgresql db. How do I verify if that's the case? I've never run into not knowing how the "data" is stored before.
<kwmonroe> bobeo: i see that bdx has storage on his todo list -- https://github.com/omnivector-solutions/layer-owncloud/blob/master/README.md#todo -- maybe sync with him to find out if there is any work in progress to help with.
<bobeo> kwmonroe: ahhh! i'll do that. I'm hoping to get this thing running sooner rather than later. My soul still burns. My poor django baby. I don't see how developers carry on. It feels like I lost a piece of my very being.
<kwmonroe> bobeo: if you have an OC deployment handy, upload a relatively big file (100ish megs) and see where it goes on the unit.  i'm guessing /var/www/owncloud/data (which is the location that would need to be shared amongst your OC peers).
<kwmonroe> if it does jam it into postgres, you'll probably see /var/lib/postgres grow by 100ish megs
<kwmonroe> bobeo: looks like you can setup external storage through the GUI; https://doc.owncloud.org/server/10.0/admin_manual/configuration/files/external_storage_configuration_gui.html -- so if you get multiple OC units talking to the same database, then go in to each UI and configure external storage, that might get you where you want to be.  then you'd hopefully find a programmatic way to do that config so the charm could do it for you next time.
<rick_h> kwmonroe: bobeo yea, you need the OC charm to elect a leader, pass its pgsql connection to the non-leaders, and enable the leader sending object storage creds to the others as well
<rick_h> kwmonroe: bobeo the idea being that if a non-leader dies no biggie and if the leader goes down a new one will be elected and already have the details it needs
<rick_h> kwmonroe: bobeo it's a bit much to bite off as your first bits of charming and maybe you can send a note to the list and get some examples. All the ones I can think of would be openstack and rather large chunks to process
<bobeo> rick_h: That sounds like serious "OOoh dis gonna hurt" content to try to take on. I guess in the interim simply wait until my skill and comfort level builds? And use Git instead? It's mostly the code I'm concerned about. Is that a feature gitlab supports?
<kwmonroe> bobeo: the leadership bits rick_h was referring to would need to be implemented in the owncloud charm.  the hard part of leadership is already handled by juju and layer-leadership.  the last mile is that piece in the OC charm that sets/gets config to ensure all OC units have the same info.
<kwmonroe> i'm not quite sure what you meant by "feature gitlab supports", but again, this would be up to the owncloud charm to coordinate the config amongst its peers.
<yosefrow_> anyone with knowledge of juju/openstack heard of a kolla juju bundle for openstack?
<kwmonroe> the docs cover a bit about juju's leadership capabilities (https://jujucharms.com/docs/stable/developer-leadership) and the layer that owncloud would need to include has a bit more about how a reactive charm works with leadership (https://git.launchpad.net/layer-leadership/tree/README.md)
<kwmonroe> ^^ that was for bobeo, not you yosefrow_ :)
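To make that last mile concrete, a minimal reactive sketch assuming layer-leadership's flags and the charms.leadership helpers; the 'db.master.available' flag and the db_uri setting name are illustrative, not from the actual owncloud charm:

    from charms import leadership
    from charms.reactive import when

    @when('leadership.is_leader', 'db.master.available')
    def publish_db_creds(pgsql):
        # The elected leader records the shared connection string so every
        # owncloud unit ends up pointing at the same database.
        leadership.leader_set(db_uri=str(pgsql.master))

    @when('leadership.changed.db_uri')
    def configure_owncloud():
        # Leaders and non-leaders alike pick up the shared creds here.
        db_uri = leadership.leader_get('db_uri')
        # ...write db_uri into owncloud's config...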
<yosefrow_> I had a conversation yesterday trying to convince someone to switch to juju for openstack, and they insisted that Kolla is the future of openstack deployments
<yosefrow_> i didnt really have a comeback
<yosefrow_> pretty much the conversation came down to why Kolla (openstack on docker) is better than the juju openstack bundle (which uses lxd)
<kwmonroe> yosefrow_: i don't have OS/Kolla exp, but typically when it comes down to docker vs lxd, it's good to consider what type of container is best for you.  docker (process) vs lxc (system) containers.
<kwmonroe> i like to ssh into things and poke around.  having /var/log/* and "normal" systemd / init functions make system containers easier for me to diagnose/debug.  ymmv of course.
<yosefrow_> kwmonroe, their argument was basically that lxd is dying and therefore not nearly as well supported as docker
<bobeo> yosefrow_: I would heavily disagree with them. Nothing is stopping you from installing magnum, which will allow you to use docker containers with openstack. You don't want docker containers for infrastructure, you want them as containers for projects. System containers allow you to push process containers, or containers in containers. You can't as easily do that with lxc in docker, but very easily with docker in lxc. It's the VM-in-a-VM argument:
<bobeo> you can't put a type 1 into a type 2, but you can put a type 2 into a type 1, which I have also done, and currently do.
<kwmonroe> did i miss a netcraft report?  yosefrow_, i'd love to see the "lxd is dying" source.
<yosefrow_> kwmonroe, if not dying, then not very popular
<yosefrow_> thats pretty much what they said
<yosefrow_> but i had no statistics
<bobeo> kwmonroe: No, you didn't. I speak with the netcraft people all the time. LXD is doing better than fine these days. Docker is an overfluffed peacock fluffing its feathers again, as always.
<yosefrow_> bobeo, their argument for container usage is that, they say, openstack services will eventually move to kubernetes because it allows seamless and 100% reliable deployment with pod failover features. Therefore, docker is the path forward.
<yosefrow_> Is there a solid basis to say that this is not the case, and that system services should not be provided in docker containers?
<bobeo> yosefrow_: They are wrong. That feature already exists with Nova Evacuate, which is also possible with Magnum. That feature has existed for years in Openstack. Not only are they full of it, they are also severely late to the game. I know kelsey personally, and he will tell you the same, as he has always told people. Kube has its purpose, so does docker, so does KVM, so does LXD.
<bobeo> yosefrow_: Yes. The first would be my personal experience, when I had to rebuild my entire openstack environment because 1 container crapped out. Unfortunately, it was my Keystone container, the only one you can't recover from if you don't have a proper HA deployment available.
<yosefrow_> bobeo, so the main issue with system containers is their tendency to fail, and failing to take into account that many system services cannot follow the recovery model, but must simply never go down ?
<bobeo> yosefrow_: The second would be any number of members from the kubernetes development team. They will directly tell you kube is designed as a devops tool, not as an infrastructure tool. It's not designed with the robustness to maintain a high availability, high load, feature rich environment. Containers, especially docker, have been service-driven in design: built to provide one specific purpose, and perform it well. Infrastructure
<yosefrow_> Basically that the fact that kubernetes services can recover quickly is irrelevant because the services cannot afford to fail in the first place?
<bobeo> continues to require a robust list of capabilities, and adapt-on-the-fly requirements, which containers aren't designed to do. They are designed to be deprecated, not maintained.
<bobeo> yosefrow_: Service containers historically die, and are built around a model where that's acceptable. System containers are built around the idea of "available at all costs". One is designed for HA environments to cover their weaknesses, one is designed to be the HA environment.
<yosefrow_> bobeo beautifully put. I like this distinction.
<bobeo> yosefrow_: also, LXC containers have a VERY LONG history of success, docker containers do not. They are very new comparatively, and people historically need to know things will work.
<yosefrow_> bobeo, do you have a link I can show the next guy who questions the viability or survivability of LXD as a container platform?
<bobeo> yosefrow_: Honestly, I would put docker inside of LXC containers, and benefit from both. Use kubernetes to manage the docker containers inside of LXC. Enjoy the benefits of LXC system performance and density improvements over KVM, and also enjoy the centralized management performance and mobility of Docker/Kubernetes. That is our current plan with our project. To migrate to that model.
<bobeo> Sure. Check out LXD vs KVM first though. Source is Canonical. Vancouver Event. It gives a good breakdown of what KVM is, and what LXC containers are. It gives a great insight into the performance difference as well.
<bobeo> yosefrow_: it allows the other side to understand why lxc containers exist in the first place, and then see why LXC is very different from Docker containers.
<yosefrow_> bobeo, my main question in this context is whether or not the future of openstack services will be docker driven. From what you've said I've gathered that even if the capability for OS services via docker exists, its a bad idea because OS is too sensitive to change to rely on the Cattle Philosophy.
<bobeo> yosefrow_: a good way to structure the environment is think about it like Hypervisors vs Baremetal.
<bobeo> yosefrow_: Yes, that is correct. Docker isn't designed to support drastic OS shifts. It's why the current issue with docker containers is patch management. The containers often die when you patch, whereas LXC doesn't give a sh*t. Patch em 50 times in an hour, LXC is Honey Badger, LXC don't care.
<yosefrow_> bobeo, so you are aware of the challenges that projects like magnum/kolla are facing, and in your opinion they are simply barking up the wrong tree?
<bobeo> yosefrow_: Yes. Think about it from this perspective. Instead of looking at it from a Me vs. You perspective, if we work together to co-exist, we are both able to play to our strengths and weaknesses, rather than focusing on being better generalists in a doomed attempt to replace each other.
<yosefrow_> bobeo, my interests are completely apolitical when it comes to solutions
<yosefrow_> This is the line that sticks out for me: <bobeo> yosefrow_: Service containers historically die, and are built around a model where that's acceptable. System containers are built around the idea of "available at all costs". One is designed for HA environments to cover their weaknesses, one is designed to be the HA environment
<bobeo> yosefrow_: Open source devs, and projects need to realize co-existance is the best path to highest performance yield. I use a wide variety of projects that do the same thing to achieve a task, and pour my efforts and focus into interoperability, which is the easiest of dev tasks.
<yosefrow_> bobeo, my job is to analyze, distill, and understand the spirit of things
<yosefrow_> bobeo, I can sympathize and understand the frustration with entities that refuse to coexist as a symptom of feeling superior
<bobeo> yosefrow_: That's exactly the issue. Every project has this crazy idea that it needs to be "superior", whatever that means. Open source isn't the Olympics, you don't get a Gold Medal or a Good Job Cookie for being the best.
<yosefrow_> bobeo, on the other hand, a good amount of ego feeds innovation
<bobeo> yosefrow_: To be honest, there is no "best", only "best for the job at hand". For Us, that means using MySQL, MongoDB, and PostgreSQL at the same time. Each better at something than the other, using HAProxy, NGinx, AND Apache to get the jobs done.
<bobeo> yosefrow_: Not really. Historically Ego has been toxic to innovation.
<bobeo> yosefrow_: If nothing else, ego is what drives your teams apart, and pushes them to create their own project, just to spite yours.
<yosefrow_> bobeo, I mean innovation requires a healthy amount of pride in what you do
<yosefrow_> if you dont believe in the solution you are building, why build it
<bobeo> yosefrow_: It's not in the best interest of the community. Yes, it's excellent to take pride in what you do, but not to be prideful. Pride is the greatest path to self destruction.
<yosefrow_> I'm not saying to ignore the next guy, just to have enough pride to drive your passion forward
<bobeo> yosefrow_: absolutely dont ignore them, call them, hang out with them, share beers together. When projects merge, that when you see amazing things built. Almost all of our greatest OS tech came from projects merging. When great minds collide, miracles happen. Let passion be your guide, and you will find that you will fly faster than any rocket ever could.
<yosefrow_> If in fact someone is blinding themselves because of pride, that's of course destructive behavior
<yosefrow_> but if pride is driving someone forward, causing them to believe they can do something better because they know they can, then I think it's a good thing
<bobeo> yosefrow_: Yes, but the issue is pride can be bruised and beaten, passion cannot.
<yosefrow_> ok, I tried to play the devil's advocate. I'm out of ammo xD
<yosefrow_> bobeo, well I think the shift towards a more cooperative community is happening already. But there are still pockets of resistance, where some people refuse to cooperate because of their own self importance.
<yosefrow_> bobeo, We will get there though. I'm sure
<bobeo> yosefrow_: Lol! I learned the hard way. My pride got so large it became a sun, creating its own gravity well, crushing everything I valued, blinding me, and pushing a lot of people I cared about away. I learned that life lesson the hard way, hopefully you wont have to. And yes, there will be hold outs. And they will burn, just like Rohan.
<yosefrow_> bobeo, been there and done that. I've been through the fires that lead to self awareness. I can't say I've completely discarded my pride. But at least I've become aware of it.
<yosefrow_> Either way, I agree with what you said earlier, that both docker and lxd have an important role to play in the future of openstack and cloud computing in general
<yosefrow_> I will take your advice with me to my next conversation. Thanks for all the tips :)
#juju 2018-02-13
<SuneK> Are any of you familiar with the postgresql charm?
<rick_h> SuneK: that's stub's baby
<rick_h> SuneK: I'd suggest asking and hanging around to get a reply
<SuneK> I was just wondering whether including postgis is as simple as just listing it in the extensions list of a relation-joined hook.
<SuneK> Or if it's even supported
<SuneK> Has anyone tried using the K8s vsphere cloud provider with the canonical kubernetes distribution
<SuneK> https://kubernetes.io/docs/getting-started-guides/vsphere/
<SuneK> Apparently it's required to move worker nodes to a specific folder in vsphere/vcenter. But I wonder how that will affect juju
<SuneK> And of course new units should also be put into that folder
<kjackal> hi SuneK kwmonroe might have more info on this. He should be online in a couple of hours
<SuneK> Thanks, lets see what he has to say :)
<SuneK> Taking a closer look, the folder is only required for k8s versions <= 1.8
<SuneK> kwmonroe: It looks like all I need to do to use the vsphere cloud provider for storage provisioning is to set disk.enableuuid=1 on the node VMs (and add the configuration to the master). I can easily set disk.enableuuid manually, but how can I ensure that it will be set on new nodes that I add?
<kwmonroe> SuneK: my vsphere/cdk expertise is currently limited to a simple deployment (which seemed to "just work" with conjure-up).  i'm working this week on persistent storage support for cdk on vsphere, but don't have anything for you to try at the moment.  that will include VM options like enableuuid, but for now, you'd have to set that manually on new nodes via the vsphere client.
<SuneK> kwmonroe: That's great news, is there somewhere I can follow along? the cdk repository on github or?
<SuneK> juju config
<SuneK> oops :)
<jhobbs> I'm using an overlay with juju deploy; where the bundle has a config value specified, I want to override that to just use the charm's default value.  what do i put in my overlay for that?
<maxiberta> hi; is this the right place to ask about https://api.staging.jujucharms.com/payment/v1 being down? (I'm trying "snap buy" on staging)
<rick_h> maxiberta: I'd hit up the internal channels as that's a staging issue
<maxiberta> wrong channel, thanks rick_h
#juju 2018-02-14
<keremasci> Hi All,
<SuneK> Hey
<keremasci> Hi,
<keremasci> is there anyone there ?
<keremasci> Helo
<keremasci> Helloooo
<thumper> people here, some just very async
<kerem> Is there anybody there?
<TheAbsentOne> A lot of people are here. :)
<icey> when issuing a `juju run`, what user is the command run under? I thought that it was root but I don't have permissions to create some directories, so now I'm curious :)
<blahdeblah> icey: ubuntu - same user as if you run 'juju ssh'.  If you want root, you can use sudo.
<icey> blahdeblah: yeah, that's what I'm doing now :); thanks!
<kwmonroe> arosales: happy valentines day:  bug 1749302
<mup> Bug #1749302: manual add-cloud should be more descriptive about the endpoint requirements <docteam> <juju:New> <https://launchpad.net/bugs/1749302>
<kwmonroe> better late than never ;)
<wpk> rogpeppe: re: my PR, I know; That's what I did at first, there's a deeper problem there with Mongo not joining replicaset (sometimes..)
<rogpeppe> wpk: sorry, i don't understand
<rogpeppe> wpk: how are the API addresses related to mongo replica sets?
<rogpeppe> wpk: it's just that the code that i saw just looked fragile to me.
<wpk> rogpeppe: At first I did it your way and it failed (with mongos on HA nodes not joining the replicaset), which got me thinking that those addresses are used for mongo too. Then I did it the way in the PR and it worked. Then I tested it again and it didn't
<wpk> there's a weird race somewhere, currently working on it
<rogpeppe> wpk: hmm, looks like there's a deeper issue there that you're papering over... :)
<rogpeppe> wpk: i don't understand why you can't just use all the addresses anyway
<rogpeppe> wpk: ah, because you don't want to accidentally connect to a remote node i guess
<wpk> rogpeppe: Yes
<wpk> rogpeppe: since that caused problems before - that's why we're forcing localhost to be first on the list, but it doesn't help in all cases as apicaller can be up before apiserver
<rogpeppe> wpk: you could filter the addresses rather than just choosing the first one (filter all addresses that aren't localhost addresses)
<rogpeppe> wpk: that would be more robust (and i think it would make sense in the code too)
<wpk> rogpeppe: For controller I'll just give one address in APIInfo.Addrs
<wpk> (just like you suggested, as this was my first thought too)
<wpk> But for now I need to figure out the deeper issue
<rogpeppe> wpk: that's fine too.
<rogpeppe> wpk: yeah. replica sets can be unreliable
<rogpeppe> wpk: although i don't think we derive API addresses from mongo replicaset addresses do we?
<wpk> rogpeppe: we don't
<rogpeppe> wpk: ah, i guess that if the API server can't connect to its local mongod, it won't be up and therefore nothing will be able to connect to it
<wpk> rogpeppe: the final symptom is that we can't connect to local mongo (as it's not in replicaset), so we can't start state, and since we're only connecting to ourselves we can't start apicaller and the rest of stuff
<wpk> rogpeppe: and I guess there's something to prevent it if we can connect to other controllers somehow
<rogpeppe> wpk: in that case, i think just filtering for localhost addresses should work fine (and it's a fairly local change)
<rogpeppe> wpk: network.SelectInternalHostPort(addrs, true) might do the job
<rogpeppe> wpk: or probably SelectInternalHostPorts, although there shouldn't be a difference
 * rogpeppe isn't very keen on the juju/network package
<wpk> rogpeppe: the problem is somewhere else
<wpk> Doesn't work: juju bootstrap aws --no-gui && juju switch controller && juju model-config logging-config="<root>=DEBUG" && juju enable-ha
<wpk> Works: juju bootstrap aws --no-gui && juju switch controller && juju model-config logging-config="<root>=DEBUG;juju.mongo=TRACE" && juju enable-ha
<arosales> kwmonroe: you know just how to sweet talk me.
<kwmonroe> heh
<ejat> hi, I'm facing neutron-gateway/0: hook failed: "config-changed" ... I'm using the openstack-telemetry bundle with MAAS. Any advice on how I should resolve the charm?
<kwmonroe> ejat: how is it failing?  juju debug-log -i neutron-gateway/0 --replay | pastebinit
#juju 2018-02-15
<SuneK> How do i get the logs of a specific snap running on a juju instance (kube-apiserver in this case)
<SuneK> I found out eventually. (It's in syslog)
<magicaltrout> alright folks
<magicaltrout> kjackal or someone. Is there something I can prod from within an app to find out if my app is running within a charm?
<magicaltrout> scrap that found what i want
<kjackal> SuneK: yes these are systemd services. journalctl should get you the logs
<kjackal> magicaltrout: you can do a while true loop and try to hear the fan :)
<magicaltrout> :P
<kjackal> lol
<SuneK> Where can I find the repos for the snaps used in the CDK?
<kjackal> SuneK: just a sec let me llok for them
<kjackal> SuneK: they are on the Global store, not sure what I should get you from there...
<kjackal> I see them when I get here: https://dashboard.snapcraft.io/snaps/
<kjackal> but thats probably because i have proper permissions...
<SuneK> Yeah I found them there, but the developer website references an email. I would like to look into their definitions and eventually modify them
<kjackal> for example: https://dashboard.snapcraft.io/snaps/kube-scheduler/
<SuneK> Hmm I get a 404 when I go there
<SuneK> Do I need special permissions?
<kjackal> Can you give us some more info on what you want to do?
<kjackal> Do you want to see the source? How these snaps are created?
<kjackal> I can point you to the github repos
<kjackal> SuneK: ^
<SuneK> Exactly, I want to make the vsphere cloud provider setup easier in CDK. Github repos would be excellent
<kjackal> ok just a sec
<kjackal> SuneK:  here is one: https://github.com/juju-solutions/cdk-addons
<kjackal> https://github.com/juju-solutions/release/tree/rye/snaps/snap
<SuneK> Thanks kjackal!
<kjackal> hi elmaciej!
<elmaciej> hi kjackal!
<kjackal> elmaciej: I am eu based so you can reach me during your morning hours
<elmaciej> kjackal: Thanks a lot. I thought that maybe they would make a decision today for this workshop but looks like it will take time
<magicaltrout> does greece count as europe?
<kjackal> lol eu for sure magicaltrout :p
<kjackal> jealous magicaltrout
<kjackal> ?
<magicaltrout> sob
<magicaltrout> i might move in
<magicaltrout> i'm very good a striking
<kjackal> You are always welcome but I thought you were moving to LA
<magicaltrout> ha
<magicaltrout> trying to sell my house currently actually
<magicaltrout> the new potential buyers are fscking rubbish
<kjackal> you are in a bad shape if you have to do fscking
<magicaltrout> at least the UK can afford sufficient bandwidth to keep me online though kjackal
<utking> How can i re-deploy a failed environment?
<rick_h> utking: what failed and you want to deploy a new model or do you mean to retry what failed?
<knobby> magicaltrout: keep the gloves above the waist!
<magicaltrout> hehe
<magicaltrout> kjackal gives as good as he gets
<knobby> this is true
<utking> I want to retry a model that failed :)
<utking> I'm deploying openstack, and it selected the wrong machine, so i have to do it again
<utking> with maas that is
<knobby> you could use --to in order to select the machine
<utking> well i kind of removed all the machines, so i just need to re-deploy the model
<utking> can i use --to in the gui?
<knobby> you can in a bundle I know, but I'm not sure about the ui
<utking> or do i have to do that specifically on the one service (cinder) that i'm deploying?
<knobby> did you destroy the model or remove the applications?
<utking> hmm
<utking> no, none of it, it took some time to set up the model the way i wanted to
<knobby> when I deployed I altered my bundle to use --to
<utking> where do you do that?
<knobby> so you can just add-unit myapp --to machine
<knobby> `juju add-unit myapp --to machine`
<utking> ah ok :) sounds nice
<utking> even though that app is a part of a big model?
<knobby> you'd have to find the bundle you were deploying and download it and edit it. Then you juju deploy that filename instead of your original bundle. Using the gui though, I don't know how you would do that, but I imagine you can...
<knobby> yeah, you'd need to add all the units back that you need for the whole model
<utking> hmm, ok
<knobby> so if you have 2 kubernetes-worker and 1 kubernetes-master(I know you don't with openstack, but I don't know the app names for that), you would run add-unit twice for kubernetes-worker and once for kubernetes-master
<utking> ah ok, i guess i'll try that then
<utking> but is there no way of just re-deploying the model i have?
<kjackal> I do not know what juju magic you pulled out here magicaltrout but I managed to get quassel back on!
<utking> like release the machines, and do it once more
<knobby> I don't know of a way to do a deploy to a specific model, but it certainly sounds like something you could do...let me look at the docs
<utking> sure, that would be great! :)
<knobby> utking: looks like you can do a juju deploy -m model_name to deploy to a specific model
<utking> i tried that
<utking> i just get no charm or bundle specified
<knobby> --map-machines exists too in order to select what machines
<knobby> oh it would be something like `juju deploy -m my_model openstack`, but then you'd get random machines...I think you need --map-machines to do what you want
<utking> hmm, where in that sentence? ^^
<utking> guess i'll read the docs some more
<utking> pretty new to juju and maas
<knobby> so maybe `juju deploy -m my_model --map-machines 0=5,1=2,2=1,3=4 openstack-base` if you wanted machine 0 from the bundle to use your machine 5, which ends up with neutron-gateway, etc
<knobby> I found the machine numbers and what mapped to them here: https://api.jujucharms.com/charmstore/v5/openstack-base/archive/bundle.yaml
<elmaciej> Hi! I'm trying to relate rabbitmq and ceph-mon, when I do it it is failing as it's trying to insert rbd module which exist
<elmaciej> should I use specific charm for rabbit on ceph based openstack installation ? I'm doing pike
<elmaciej> ceph-relation-changed subprocess.CalledProcessError: Command '['modprobe', 'rbd']' returned non-zero exit status 1
<knobby> sorry, utking, it would be --map-machines=existing,0=5,1=2,2=1,3=4
<utking> oh wow! thanks knobby! I'll try that out, you've been an angel :)
<kjackal> elmaciej: that might be an issue with the ceph relation
<elmaciej> kjackal: it looks like, I'll try to deploy rabbit outside of the container, I have ceph-mon running without container too
<elmaciej> wondering if I need a relation between ceph and rabbit
<kjackal> elmaciej: there are some restrictions when trying to modprobe inside a container
<kjackal> the problem is that the kernel is owned by the host operating system and it might not allow for module loading
<elmaciej> well it should check if the module is not loaded as it is already loaded
<kjackal> fair point, if you open an issue on the charm code the devs will eventually get to it
<kjackal> you could also submit a PR
<kjackal> or even "fork" the charm under your namespace
<elmaciej> well I usually pull the charm and rebuild if it's quick fix :)
<kjackal> you do charm pull or do you clone the charm layer and then do a charm build?
<elmaciej> charm pull, then fix, then charm build and install local charm
<kjackal> which charm are you using?
<elmaciej> kjackal: I found the proper way: neither of them can be put in a container, and then it looks good. Testing now, will let you know if any issues appear. Thanks!
#juju 2018-02-16
<utking> I'm having some problems with my model
<utking> in juju that is
<utking> When i export the finished model, and then try to import later, i just get a bunch of relation errors, and the model won't import
<utking> any ideas?
<kjackal> utking: can you pastebin the errors? Never hit this problem but knowing the error is vital in offering any opinion :)
<utking> Hey again guys!
<utking> We've been struggling for some days now with juju and openstack
<utking> If i export a model, i get an error when i import it again
<utking> https://imgur.com/a/S2ksa
<utking> any ideas to why?
<knobby> utking: I think we'd need to see the bundle
<rick_h> utking: looks like the export failed to include the machine definition? and not sure on the relation bits; the $ isn't legit, so not sure what that's supposed to be
<utking> The only thing i did was add the openstack base from the charm store without editing anything
<utking> then export, and import again
<rick_h> frankban: ^ might have an export bug in the GUI. I'm not able to try it ATM.
<SuneK> kwmonroe: if it has any interest I did a small writeup of how i got  vsphere cloud storage provisioning working. https://sunekjaergaard.blogspot.dk/2018/02/making-canonical-distribution-of.html
<rick_h> SuneK: awesome!
<frankban> utking: thanks for reporting, we'll verify
<utking> Great! haven't done anything specifically, just bootstrapped through maas
<magicaltrout> any CDK devs ever seen
<magicaltrout> Unknown desc = NetworkPlugin cni failed to set up pod "ta2-109a5e50848f44fc9c01d3b894a9fcb8-search-4p2hw_default" network: failed to allocate for range 0: no IP addresses available in range set: 10.1.14.1-10.1.14.254
<magicaltrout> when you've only got a few pods running?
<magicaltrout> using flannel
<magicaltrout> why can't i find the page telling me how to upload resources
<magicaltrout> ah
<kwmonroe> magicaltrout: juju attach?
<magicaltrout> na i was trying to find the charm push blah stuff
<kwmonroe> magicaltrout: for the networking bit, is your 10.1.14 range stomping on your provider's range?
<magicaltrout> my google fu failed
<kwmonroe> yeah, charm push blah, charm release resource-<n>
<magicaltrout> https://github.com/kubernetes/kubernetes/issues/57280
<magicaltrout> i dug up that
<magicaltrout> which i think is probably related
<magicaltrout> chap says its a flannel GC issue supposedly
<magicaltrout> i do know a clear out generally solves it
<magicaltrout> but i see it pretty frequently
#juju 2020-02-10
<thumper> very quiet monday here
<ec0> now you've done it
<hpidcock> wallyworld: for the grade field in the snapcraft.yaml, should this be stable or devel depending on if it's a release snap? see https://snapcraft.io/docs/snapcraft-yaml-reference and also your pr https://github.com/juju/juju/pull/11138/files
<hpidcock> do we currently just manually build the dev snaps?
<hpidcock> current release scripts just leave grade: at the default, stable
<hpidcock> I guess I could sed the snapcraft.yaml to change it
<achilleasa> can I get a quick CR on this tiny PR? https://github.com/juju/charmrepo/pull/160
<nammn_de> achilleasa: can take a look
<nammn_de> stickupkid: up for a 5 min hangout regarding the race conditions a mentioned? May need a pointer or an idea
<stickupkid> nammn_de, daily
<stickupkid> achilleasa, we should ping wgrant about this
<achilleasa> stickupkid: I will when the change lands in juju 2.7 branch
<achilleasa> stickupkid: have you brought in your charmrepo changes from last week?
<achilleasa> stickupkid: arghh.. 2.7 still uses charmrepo.v3; gotta backport the change
<nammn_de> stickupkid: https://github.com/juju/juju/pull/11198
<stickupkid> nammn_de, done
<nammn_de> stickupkid: ta! I see you are working on the not finding artefacts thing on lxd. Any updates on that one? This one seems to be a great help to me.  Need a helping hand?
<stickupkid> nammn_de, ah, not actively atm
<nammn_de> stickupkid: small pointers for me, so that I can take that over?
<stickupkid> nammn_de, when the test suite bombs, we need to work out why we don't create a output tar.gz file
<nammn_de> stickupkid: got it, lookin
<achilleasa> nammn_de: or stickupkid quick CR for bringing charmrepo changes into juju: https://github.com/juju/juju/pull/11199 and https://github.com/juju/juju/pull/11200
<nammn_de> achilleasa: approved
<achilleasa> stickupkid: have you seen this error before? https://github.com/juju/juju/pull/11200/checks?check_run_id=435919954
<achilleasa> is charmrepo.v4 gomod-only now?
<stickupkid> achilleasa, why does dep care about gomod though
<stickupkid> achilleasa, shouldn't it just be a code repo?
<achilleasa> stickupkid: yeah... don't get it. The juju .lock file has been updated but the digest looks empty: https://github.com/juju/juju/pull/11200/files#diff-bd247e83efc3c45ae9e8c47233249f18R1436
<stickupkid> haha, lol wat
<stickupkid> can you rebuild it?
<achilleasa> Let me try with modules off...
<achilleasa> stickupkid: second time worked... go figure
<rick_h> morning party folks
<manadart> Morning rick_h
<achilleasa> nammn_de: I think we have a flaky test (TestConstraintsOpsForSpaceNameChange). Was that added by the recent work for space renaming? If that's yours you may want to throw in a strings.Sort prior to comparing https://paste.ubuntu.com/p/pnmnxrdBpQ/
<nammn_de> achilleasa: yeah thanks! Added that in the next spaces patch
<manadart> achilleasa nammn_de: Or use jc.SameContents.
<achilleasa> manadart: thanks for the tip; didn't know that was a thing!
<nammn_de> manadart achilleasa: sorted it was put in the bigger patch here: https://github.com/juju/juju/pull/11186/files if pressuring I can exclude and put in seperate patch
<nammn_de> manadart: ohh did not know that one
<stickupkid> rick_h, so the schema tests won't work because both juju 2.7 and develop target the pylib master branch and we don't know if there are breaking changes
<stickupkid> rick_h, the easy fix is to add branches back to pylib
<rick_h> stickupkid:  boooooo, maybe
<stickupkid> manadart, nammn_de maybe we should just push all artifacts to s3
<stickupkid> be done with it
<manadart> stickupkid: Sounds like a plan.
<nammn_de> stickupkid: that sounds great.  I can look into that next ci day (may need some consideration points here and there)
<stickupkid> nammn_de, we'll still need to get it out of the container though...
<nammn_de> stickupkid: yeah thats something i am looking at today, if not next week =D
<stickupkid> rick_h, ho?
<rick_h> stickupkid:  /me jumps in
<rick_h> oh wait, that was 30min ago sorry
<rick_h> stickupkid:  what's up?
<stickupkid> in ho:)
<timClicks> morning juju
<rick_h> morning timClicks
<hml> howdy timClicks
<timClicks> it looks like we're starting to see more and more lively discussions on discourse
<rick_h> chit-chat
<timClicks> where is the source code for the k8s charms in https://jaas.ai/u/juju/#charms? we should flesh out their readmes
<babbageclunk> easy review for someone? https://github.com/juju/juju/pull/11202
<timClicks> babbageclunk: love those
<timClicks> babbageclunk: the old shortwait to longwait switcheroo
<babbageclunk> timClicks: ha, thanks!
<timClicks> remind me again why there's no default model provided with k8s clouds? I did know once
<hpidcock> default is a namespace that exists in a kube cluster
<hpidcock> and may be used by stuff already deployed in a cluster
<hpidcock> so we would need a new name
<hpidcock> easier to just not create one atm
<hpidcock> or if we did, the default model would need special behaviour so it doesn't create the namespace nor delete it. It would also have to be able to play nice with what is already in there/share with another juju controller etc
<hpidcock> just too much to deal with.
<timClicks> hpidcock: I thought it was to do with the namespace issue
<hpidcock> yeah, thats the root of the issue
<hpidcock> but whatever way you try to work around it isn't nice
<hpidcock> a user could delete the namespace then create a model called default
<hpidcock> but that still has issues
#juju 2020-02-11
<babbageclunk> anastasiamac: https://github.com/juju/juju-restore/pull/1 - I can't request a review from you for some reason, maybe you need to be made a repo owner or something?
<anastasiamac> \o/
<anastasiamac> just saw :)
 * anastasiamac looking
<babbageclunk> thanks!
<timClicks> anastasiamac: could you please test the vids on this page again for me? https://jaas.ai/docs/microk8s-cloud
<anastasiamac> timClicks: in ff, all good
<anastasiamac> timClicks: in chrome, i get a line-height black space :(
<timClicks> :/
<anastasiamac> did u set a concrete height?
<timClicks> no, Discourse is writing the HTML itself
<timClicks> it looks like i'll need to write HTML manually
<anastasiamac> i wonder if it's html or if we can modify css?... maybe worthwhile reaching out to ppl who wrote discourse?...
<anastasiamac> babbageclunk: i agree with u re: comments that are obvious like // Name contains the name
<anastasiamac> babbageclunk: but we've had these discussions previously on the project too and got overruled...
<anastasiamac> i'd rather we did the agreed standard here so that we can use it as sample code, say
<hpidcock> babbageclunk: anastasiamac: although I agree with this, the example of `// Name is the name` is definitely just writing the obvious. Names quite often have constraints e.g. `// Name of the Person and must conform to the regex blabla`. Even explain the value can't be empty etc. There is always something to say about an exported value or function.
<hpidcock> this applies to all values
<anastasiamac> i agree and in fact like the idea of naming variables in an obvious manner to avoid writing comments...
<anastasiamac> but we've had discussions in the past where the consensus was to always comment exported stuff even when the naming is obvious
<babbageclunk> I don't know what else I'd write for ReplicaSet.Name, other than `// Name of the replica set` - if there are interesting constraints, I don't know them.
<anastasiamac> and to be honest, the constraints on the name of the replicaset may be defined elsewhere...
<hpidcock> `if there are interesting constraints, I don't know them` probably is a sign that you don't know enough about that value
<babbageclunk> hpidcock: What would you write for that? This is for reading the replica set from the db - it's not for creating them.
<hpidcock> anastasiamac: yep, and probably you can refer elsewhere
<anastasiamac> if we wanted to be particularly difficult we could say 'name of the replicaset as known to juju/db'... but realistically u probably can only say 'name contains name'
<hpidcock> `// Name of the replica set` is fine better than `// Name is the name`
<babbageclunk> I think adding a link to the Mongo docs for the replica set struct makes sense, but I don't think it helps anyone to add individual field docs for things that are straightforward.
<anastasiamac> realistically this tool will not be used by other non-juju users
<anastasiamac> it'll only be used with juju setup.. that means that the replicaset name does not need to refer to mongo doc but to juju doc, surely
<anastasiamac> and in juju the replicaset is always named 'juju" i think... it's hardcoded.. no?
<hpidcock> yeah all I'm saying is sometimes we just write the simple comment thinking it is straightforward, but when something stands out like that, it's worth trying to see it from the perspective of a reader, to whom it might not be obvious
<anastasiamac> fwiw m not keen on commenting obvious names but the consensus is the consensus.. maybe we should discuss at the next x-team
<anastasiamac> +1
<babbageclunk> hpidcock: yeah, but I don't think `// Name of the replica set` is net positive. It's just noise that takes up valuable screen space. Definitely if there's anything to say about the field it's worth doing.
<anastasiamac> babbageclunk: can u ho? m struggling with model vs environment in backup...
<hpidcock> it's just a general Go consensus to comment exported things
<babbageclunk> sure
<anastasiamac> stdup?
<tlm> develop -> 2.7 if anyone is free https://github.com/juju/juju/pull/11203
<tlm> sorry 2.7 -> develop
<timClicks> new juju post up on the ubuntu blog https://ubuntu.com/blog/devops-tools-in-2020-why-consider-juju
<nammn_de> manadart, achilleasa: cr for a smaller patch? https://github.com/juju/juju/pull/11195
<nammn_de> manadart: I commented with background information on your comment https://github.com/juju/juju/pull/11186 happy to exchange for review/qa from my side :D
<stickupkid> manadart, quick HO?
<stickupkid> is he around, or was he taking today off
<stickupkid> guess I'm going to have to figure out if a networkID string is the same as corenetwork.Id
<stickupkid> one way to find out
 * stickupkid answers his own question, yes they do
<nammn_de> stickupkid: yeah just saw in the calendar, seems like he's taking off today. Vaguely remember him mentioning that he may or may not take today off
<nammn_de> in that case for the smaller patch stickupkid achilleasa:  mind smaller cr? https://github.com/juju/juju/pull/11195  manadart already took a look and gave a comment.  I am not 100% sure whether he and I were on the same page there. Maybe one of you can take a look and tell me that I may have overlooked something?
<achilleasa> nammn_de: looking
<nammn_de> achilleasa: in past we only made it possible to show-space per name, where we searched the  `name` field in the database. New patch enables search in `name` and `spaceid`
<achilleasa> nammn_de: I am curious, what's the rationale behind displaying the space IDs? IIRC we wanted to avoid exposing internal IDs to the operators
<achilleasa> (e.g. the rebind command works with space names and translation happens server-side)
<nammn_de> achilleasa: displaying in show-space output? was part of the usage spec.  juju list-spaces does this as well
<nammn_de> or do you mean searching by ID?
<achilleasa> both actually...
<nammn_de> the first: was following the spec provided here https://docs.google.com/document/d/1tE5GMF9Uw8W7QQoybgA_vs_RqCHQdbKXmQT_Kaaj6Pk/edit and following list-spaces.
<nammn_de> 2: rick_h opened a launchpad bug and said it makes sense to search by id
<nammn_de> https://bugs.launchpad.net/juju/+bug/1862238
<mup> Bug #1862238: show-space quotes integer and doesn't work with id as argument <juju:Triaged by nammn> <https://launchpad.net/bugs/1862238>
<nammn_de> achilleasa: thanks for the input! yeah it's not 100% clear. We may want to wait for rick_h and see what his input is
<manadart> stickupkid nammn_de: I am off today.
<stickupkid> manadart, yeah we know :)
<nammn_de> manadart: ah sure, have a nice day!
<stickupkid> manadart, go away :p
<nammn_de> manadart: have fun with the life outside juju
<manadart> stickupkid: I need to fix a test, but if you have time please take a look at https://github.com/juju/juju/pull/11194
<manadart> L8r.
<nammn_de> stickupkid: Im thinking about removing the tests we were talking about yesterday altogether.
<nammn_de> https://github.com/juju/juju/blob/71c2a5a089bc34bbde192c785b96193d1aaa97fa/cmd/juju/status/status_internal_test.go#L6104-L6103
<nammn_de> stickupkid: as we already have them as bash-based tests, which run as expected. Having them in this state-based test suite is not winning us that much IMO. They are not unit tests, so they're somehow on a similar level to the bash ones which already exist. What do you think?
<nammn_de> btw. i ran it locally with -count to see where it fails... and it does not fail where we hoped it would fail :/
<stickupkid> nammn_de, hmm, thinking...
<nammn_de> alternative solution would be to poll and retry. But that would be even more similar to the bash-based one ...
<stickupkid> nammn_de, does it fail on CI yet?
<rick_h> achilleasa:  nammn_de the main reason to expose the space id is fear that we don't catch translating it everywhere and somewhere the logs would show "id 6" and users would be stuck figuring out wtf that was
<rick_h> basically an escape hatch
<rick_h> then when you run `juju spaces` you see this id and lots of folks would think to use an id as a key to get info
<rick_h> achilleasa:  nammn_de but I didn't realize "0" would be a valid space name...ugh
<rick_h> ...and we can't control that as we get the names from MAAS...
<rick_h> show both? it's just YAML :)
<achilleasa> rick_h: however, this would make the operators assume that you can safely swap ID/name with all space-related commands (could cause confusion with bind)
<rick_h> achilleasa:  hmmm, fair.
<rick_h> achilleasa:  nammn_de do we know for sure what the names allowances are in MAAS?
<rick_h> in juju we use the names package which doesn't allow starting with an int or having a single character if I recall so curious what the exact rules are in maas
<achilleasa> rick_h: see my comment in nammn_de's PR. This is the space validation regex: https://github.com/juju/names/blob/v3/space.go#L13
<achilleasa> rick_h: and this is the maas -> juju space conversion logic: https://github.com/juju/juju/blob/develop/core/network/space.go#L207
<nammn_de> stickupkid: the "unit" test? yes, sometimes. It failed from around 50 runs, around twice?
<nammn_de> rick_h achilleasa: we can ho about this as well?
<nammn_de> or wait until tomorrow for manadart and then do the ho
<rick_h> achilleasa:  nammn_de sorry, otp atm
<nammn_de> rick_h: sure no worries. Can be done later today or tomorrow then
<nammn_de> achilleasa: yeah afaict, ints are valid space names as well
<stickupkid> nammn_de, but what failed?
<stickupkid> nammn_de, the output?
<nammn_de> ah, sry, yes the output. The check I added before (check whether activeBranch is set to bla) was successful. The check after that failed.
<nammn_de> *stickupkid^
<stickupkid> ah, so it's not the controller store, this is in the command then
<nammn_de> stickupkid: this fails: https://github.com/juju/juju/blob/71c2a5a089bc34bbde192c785b96193d1aaa97fa/cmd/juju/status/status_internal_test.go#L6118 not the one above
<nammn_de> yeah its somehow the command
<nammn_de> should be noted.
<nammn_de> stickupkid: my "findings" https://trello.com/c/KG4UJj20/2324-find-out-why-cmd-status-branches-is-flaky
<nammn_de> stickupkid: added a discourse post which can be updated for later. https://discourse.jujucharms.com/t/flaky-ci-tests/2621
<nammn_de> rick_h: pr was updated to only include documentation update https://github.com/juju/juju/pull/11195/files
<rick_h> nammn_de:  cool ty
<rick_h> nammn_de:  +1'd
<stickupkid> hml, PR of backporting error and logging info https://github.com/juju/juju/pull/11205
<bloodearnest> o/ folks - my localhost controller is struggling to provision lxd's, I get this in the logs:
<bloodearnest> 2020-02-11 15:38:52 WARNING juju.apiserver.provisioner provisioninginfo.go:609 failed to save published image metadata: missing region: metadata for image  not valid
<rick_h> bloodearnest:  juju version, local os series version, and lxd version?
<stickupkid> bloodearnest, can you run this cat ~/.local/share/juju/clouds.yaml
<bloodearnest> rick_h: juju 2.7.1-bionic-amd64 (from snap)
<bloodearnest> host is bionic
<bloodearnest> lxd is 3.20 from snap
<bloodearnest> stickupkid: no such file
<rick_h> bloodearnest:  and this is juju bootstrap localhost?
<bloodearnest> rick_h: yeah, I ripped down the controller and redeployed, on 2.7.1 now
<rick_h> bloodearnest:  can you run it with --debug and paste the output please?
 * bloodearnest hangs head in shame
<rick_h> uh oh
<bloodearnest> somehow deployed a service with no units
<rick_h> bloodearnest:  oh...interesting
<bloodearnest> misread the waiting as failing to get an lxd provisioned
<rick_h> bloodearnest:  bundle without any units in the definition?
<bloodearnest> rick_h: yep, was doing some yaml munging to get a cut down env, and overmunged
<bloodearnest> rick_h: stickupkid: thanks for the quick response, sorry for the noise!
<stickupkid> bloodearnest, not a problem, why we're here :)
<hml> achilleasa:  ping
<achilleasa> hml: here
<hml> achilleasa: looking for a name for the "error on no value" thing.  but that's a long name for a flag.  "show-errors"?  eh?  ideas?
<achilleasa> hmmm
<achilleasa> hml: how about --strict? "Return an error if the requested key does not exist"
<hml> achilleasa:  hrm...
<hml> achilleasa:  okay
<hml> ty
<evhan> Hi folks, does anyone have any tricks for troubleshooting a first deployment of a charm stuck in status "unknown"?
<evhan> Seeing this with a local charm as well as mariadb-k8s.
<evhan> Nothing in `juju debug-log` for that application.
<babbageclunk> evhan: is the machine up?
<evhan> I have logs from other applications in there, working fine.
<evhan> Hmmm, but `juju run --application ...` doesn't connect.
<babbageclunk> evhan: I'd ssh to the machine and see whether the unit agent is running
<babbageclunk> (I mean, `juju ssh` to the machine)
<babbageclunk> ah, hang on - this is k8s - sorry, I'm not the best person to answer this then :(
<evhan> That's OK, I think there's something else going on with the environment, other tools throwing a fit as well.
<tlm> If it's k8's I would take a look at the status of the deploy pods
<evhan> Yeah, sorry, it was nothing to do with juju.
<timClicks> evhan: are you sorted?
<evhan> Yeah, sorry, it was CoreDNS going wobbly but not application-related.
#juju 2020-02-12
<babbageclunk> anastasiamac: ok, https://github.com/juju/juju-restore/pull/1 is ready for another look (I guess after standup)
<anastasiamac> babbageclunk: will do soon \o/
<babbageclunk> thanks
<wallyworld> kelvinliu: we can land that packaging PR on libjuju
<kelvinliu> wallyworld: yep
<thumper> PR ready for the introduction of the model summary watcher to the cache: https://github.com/juju/juju/pull/11208
<thumper> and a much simpler one to update a dependency: https://github.com/juju/juju/pull/11209
 * anastasiamac lloking at 11209
<timClicks> thumper: did you want to catch up for a few minutes now that those PRs have landed?
<timClicks> wallyworld, kelvinliu, babbageclunk: found the other issue that a user has had w/ zone constraints https://discourse.jujucharms.com/t/using-vmware-vsphere-with-juju/1099/3?u=timclicks
<timClicks> perhaps that's the same one
<thumper> timClicks: sure, I have 15 minutes now
<kelvinliu> yeah, the same one
<thumper> jam: with you shortly, just trying to finish up a code review
<jam> ok
<babbageclunk> does anyone know of other repos of ours that use go modules?
<wallyworld> babbageclunk: charm.v4 i think
<wallyworld> some of the upstream juju ones
<wallyworld> charm.v6 i mean
<wallyworld> charmrepo.v4
<babbageclunk> ok, thanks wallyworld
<babbageclunk> weirdly, our build script for charm.v6 still installs go 1.10
<babbageclunk> I don't get how that works
<wallyworld> we'd still use dependencies.tsv in that case i'd expect?
<babbageclunk> wallyworld: oh yeah - dependencies.tsv's still there
<babbageclunk> aha, the charmrepo one is more like what I want
<wallyworld> good
<_thumper_> wallyworld, tlm: PR 11203 introduced conflict markers into the acceptance tests python file
<thumper> it is what is causing many stop the line failures...
<tlm> taking a look
<tlm> i'll put up a pr thumper, sorry about that
<thumper> tlm: thanks
<tlm> https://github.com/juju/juju/pull/11210
<hpidcock> tlm: lgtm
<hpidcock> same lgtm that let it through in the first place sorry
<anastasiamac> babbageclunk: \o/
<vultaire> New MR against the interface:elasticsearch layer, need a review if possible: https://github.com/juju-solutions/interface-elasticsearch/pull/14
<nammn_de> manadart: around for another review round on: https://github.com/juju/juju/pull/11186 I incorporated your feedback. On those things where I wasn't 100% sure I wrote some comments how and why I did it that way
<manadart> nammn_de: Will look in a minute.
<nammn_de> stickupkid: around for a quick talk about the test I mentioned yesterday?
<stickupkid> nammn_de, yeah give us 5
<nammn_de> stickupkid: sure take your time
<stickupkid> nammn_de, daily
<stickupkid> manadart, just grabbing a drink, but once I've done that want to HO about openstacky
<manadart> stickupkid: Yep, got to pick-up/drop-off my daughter just before 12, but ping me.
<stickupkid> what time is it there now?
<stickupkid> 11?
<stickupkid> manadart, ^
<manadart> stickupkid: Yeah, 10 past.
<stickupkid> manadart, ping
<nammn_de> stickupkid: up for a smaller cr? https://github.com/juju/juju/pull/11212 only updates tests, nothing customer facing
<nammn_de> manadart: was planning to look into update-space. Was planning to spec that out again. Time for a HO later?
<manadart> nammn_de: Maybe, looking at Prodstack issue right now.
<nammn_de> manadart: sure, just gimme a ping else I start specing myself and you can take a look at the network cli doc and give a comment
<hml> stickupkid: it looks like what is missing... the juju-<model-name> profile.
<stickupkid> hml, but why?
<hml> stickupkid: the 64 thousand dollar question.  :-D
<stickupkid> hml, do we set it?
<rick_h> morning
<hml> stickupkid: yes, we should.   it should go on every container
<hml>  stickupkid i'll play with it some today
<hml> stickupkid: wait... brain fart - they only go on machines that are containers... not on containers nested inside machines.
<hml> lack of coffee brain
<hml> looks to me like the charm profile is being applied.
<stickupkid> hml, that's very strange, we do remove it in certain places
<hml> stickupkid: that profile was never applied to #/lxd/# machines, only #.  I'd have to look at the profiles to maybe remember why
<hml> it's from the beginning of lxd in juju i think
<hml> stickupkid: it allows us to create a container in a container
<hml> the juju-default etc profiles
<hml> not sure we're ready for a container in a container ... 0/lxd/0/lxd/0?
<stickupkid> yeah... errr
<achilleasa> hml: 11206 has been approved
<hml> stickupkid: we would have seen errors if the container couldn't be deployed.
<hml> achilleasa:  ty
<hml> stickupkid: ping me when you're back.  found something interesting
<rick_h> manadart:  if you get a sec can you update your "changes requested" on https://github.com/juju/juju/pull/11195 please?
<rick_h> the code there in question is gone now anyway
<hml> stickupkid: reproduced locally perhaps
<manadart> rick_h: Approved it.
<rick_h> manadart:  ty
<achilleasa> hml: I got my framework patches to play nicely with the new state tools :-)
<hml> achilleasa:  sweet!
<hml> anyone noticed that subordinate unit numbering may not increment by 1? I got units of 1,8,14,15 after a deploy and add-unit 3 times.
<hml> stickupkid: reproducing; i'm at a 50% fail rate.  tracking down the root cause
<IOstars> Hey there everyone, does anyone here know when nova-compute charm will pull release with the current fixes in milestone 20.02?
<rick_h> thedac:  ^ ?
<rick_h> IOstars:  I think the charm release is imminent but defer to them on their changes getting out the door
<thedac> Yes, charm release is imminent but waiting on QA
<IOstars> Getcha, just curious. Have a fresh install that got hit by the critical bug being fixed there.
<IOstars> Grreat thanks :)
<rick_h> IOstars:  :( glad there's a fix. Yea coming soon
<hml> stickupkid: https://pastebin.canonical.com/p/TKdw4mpwkT/ line 2 ; care to take bets on the cache?  :-D
<stickupkid> sigh
<stickupkid> hml, quick ho?
<hml> stickupkid: i think your pr from jan will fix it.
<hml> stickupkid: sure
<thumper> morning team
<rick_h> morning thumper
<hml> morning thumper
<vultaire> hello - hoping to wake up a MR which has been stuck without review for some time.  interface:elasticsearch, adding "version" field to share MongoDB version.  MR: https://github.com/juju-solutions/interface-elasticsearch/pull/13
<vultaire> If anyone could take a look, would be quite appreciated.  Thanks.
<vultaire> erm
<vultaire> I totally botched my statement there ;)
<vultaire> "version" field shares the elasticsearch version ;)
<vultaire> (I have a different MR re: mongodb as well; mixed up in my brain a bit.)
<timClicks> what's the simplest way to install juju_engine_report? We make oblique reference to it in places, but I can't find a place where we publicise it at all.
<wallyworld> it's automatically installed by juju itself when it creates a machine
<wallyworld> all those diagnostic scripts are put in place, of which engine report is one
<wallyworld> /etc/profile.d/juju-introspection.sh
<timClicks> wallyworld: could you please check the note here for accuracy? https://discourse.jujucharms.com/t/what-are-your-tips-for-running-juju-in-production/2573/10?u=timclicks
<wallyworld> will do, just finishing something first
<timClicks> np
<wallyworld> also works on k8s, but you need to source the script when you exec in, there's a discourse post on it
<timClicks> and some release notes ;) https://discourse.jujucharms.com/t/new-features-and-changes-in-juju-2-7/2268
<wallyworld> timClicks: were you going to mention the other functions that are available?
<timClicks> wallyworld: eventually.. simon's post prompted me to think about how discoverable juju_engine_report is
<wallyworld> i think we could build the summary into the engine report as a standard thing
 * timClicks nods
<timClicks> How do you train your staff to use Juju? Some suggested tasks for people learning Juju https://discourse.jujucharms.com/t/-/2639
<timClicks> ^ would love anyone's input on this thread!
<anastasiamac> timClicks: love it
<anastasiamac> timClicks: for advanced, i'd recommend adding model migration and CMR stuff too
<timClicks> :D
<timClicks> great idea
<timClicks> anastasiamac: do you want to add a comment there yourself?
<anastasiamac> and maybe at each level, how do u troubleshoot problems with each task? like usage of logs, the debug-log command etc
<Sp3nc3r> any free mailer
<Sp3nc3r> or smtp
<anastasiamac> timClicks: feels like stepping on ur toes.... but maybe :)
<timClicks> no no, comments in the thread are welcome - otherwise it's always me talking to myself
<anastasiamac> and that is fun to watch too :D
<timClicks> Sp3nc3r: are you asking if you can deploy a mail server like postfix with juju?
<timClicks> evhan: sorry I couldn't be more helpful in discourse
<evhan> timClicks: Not at all! That background is already helpful.
<timClicks> evhan: there's actually some evidence that the majority of deployed charms are "private charms" because many people use the public charms as their base, then make tweaks
#juju 2020-02-13
<babbageclunk> tlm: looking through the model manifolds, something clicked about what was confusing me - there are lots of workers wrapped in ifNotMigrating that seem like they should only run once across the controller agents.
<babbageclunk> tlm: It turns out they all depend on environTracker/caasBrokerTracker, which *is* wrapped with isResponsible, so they only run once.
<evhan> I have a situation where I've upgraded a charm twice in quick succession, and have ended up with one working unit and one unit stuck in state "unknown", and the application "waiting" with 2/1 units running. Apart from the logs is there some place I can look to see what it's waiting *for*?
<evhan> Or is that entirely defined by the charm (e.g. it went into "waiting" because some hook has not been implemented)?
<tlm> is this k8's ?
<evhan> Yeah. The thing is when I give it a minute to go green between deploys, things work OK.
<evhan> This was just a case of overcaffeination, but now that it's stuck I figure I should investigate.
<tlm> what does the status of the pod in k8s say ?
<evhan> Just Running, looking for interesting events now...
<tlm> hmmm, my theory is that the k8s watcher in juju has not caught up. If you force some event to happen to the pod does that change the status in Juju? For example changing a pod label, restarting it, etc etc
<evhan> Huh, indeed. I just added a dummy env entry to the pod and it came right.
<evhan> Thanks.
<tlm> np, we just re-implemented the watchers for k8s in 2.7.2 (out soon) and I think this will have solved that problem
<wallyworld> 2.7.3
<wallyworld> watchers change won't be in 2.7.2
<tlm> ah yep
 * tlm chants make watchers great again
 * thumper needs moar coffee too
<anastasiamac> babbageclunk: PTAL https://github.com/juju/juju/pull/11215
<anastasiamac> babbageclunk: ends up being mostly mechanical
<babbageclunk> anastasiamac: ok, taking a look
<anastasiamac> babbageclunk: no rush!
<babbageclunk> anastasiamac: ooh, it's big - probably won't get it finished today
<wallyworld> hpidcock: i've left some thoughts on the PR, happy to discuss
<kelvinliu> wallyworld: https://github.com/juju/juju/pull/11211 PR for adding ValidatingWebhookConfigurations support and upgrade k8s API to 1.17 , could u take a look? ty
<wallyworld> sure
<kelvinliu> im going out now, I will respond tmr morning if there are any questions. thank you
<wallyworld> np
<stickupkid> I like how openstack api ref has object fields with colons in it - hmmm
<achilleasa> shouldn't the SetPodSpec call check for leadership on the server-side? It seems that this is only checked at the client
<nammn_de> manadart: thanks for patch and suggestion! I updated the code and corresponding tests
<stickupkid> manadart, here is the code, I just need to speak to hml about how to test this locally https://github.com/go-goose/goose/pull/77
<achilleasa> anyone knows if it is possible to be running an agent binary that is *newer* than the controller? The reverse is possible (agent version gets synced when you juju upgrade-model)
<rick_h> achilleasa:  I don't think so. I think the controller has to be the newest so we know the apis are available
<achilleasa> rick_h: so basically, I am trying to figure out if this check is required or not: https://github.com/juju/juju/blob/develop/api/uniter/unit.go#L786-L788
<rick_h> achilleasa:  heh, sure would be nice if there was a comment on the logic of the assertion...
<nammn_de> manadart: regarding your comment to add the DocID to the existing constraintsDoc.. I had that initially in the PR. That led to problems because the existing Update code wasn't expecting that. I may be able to work it out and rewrite the update code. Just thought that would not be relevant for this patch
<rick_h> achilleasa:  yea, so I know that in Juju upgrades the controller has to be done first and is promised to be later than any of the unit agents at first.
<rick_h> achilleasa:  then when you upgrade the other models they slowly come up to speed
<rick_h> achilleasa:  so I can't think of any way that hits, but someone clearly wrote it for some reason.
<achilleasa> rick_h: that's a pretty old facade version (we are currently at v15)
<rick_h> achilleasa:  yea...and being it's in the uniter it's not suceptible to having an old client
<stickupkid> https://github.com/juju/juju/pull/11093#pullrequestreview-358214590
<stickupkid> hml, tiny nit
<hml> stickupkid: notes have "go test -live -check.v", which now gets a -live not valid flag?  also "go test -live -check.v -image <id> -flavor <name> -vendor canonistack ./..." for nova live tests
<stickupkid> hml, it's a start :)
<hml> stickupkid: hand written notes from fall 2019.  ha!
<nammn_de> manadart: i've fixed and changed the constraints DocID. While working on it a migration_internal_test failed. I added the DocID to let it pass. Does adding the DocID have implications for migration?
<manadart> nammn_de: What you've done is fine.
<hml> stickupkid: https://github.com/juju/juju/pull/11217
<hml> so is errors.WithStack the new errors.Trace?
<stickupkid> ah, yeah sorry i meant errors.Trace()
<stickupkid> hml
<hml> :-)
<hml> stickupkid: weâve proven that â InstanceMutaterV2 interface â is a waste right?  I could remove now
<stickupkid> i believe so
<stickupkid> hml, holy batman, so many tests testing the test code
<stickupkid> goose needs to learn about mocking
<rick_h> hah, I think it was a bit pre-mocking party time
<hml> hahahahaha
<rick_h> careful, you're going to hurt hml with all the laughing
<stickupkid> actual code change +184, tests that test the test +1000000000
<achilleasa> stickupkid: hmmm... but what verifies that the test runner runs the tests properly instead of cheating? ;-)
<hml> achilleasa:  -live
<stickupkid> achilleasa, turtles all the way down
<achilleasa> I am reading the docs for AddUnitStorage (https://github.com/juju/juju/blob/develop/apiserver/facades/agent/uniter/storage.go#L351-L353). The implementation looks like it does a best effort to provision storage and does not stop after an error. However, the client side logic seems to treat any failure as an error and will bubble up the error to the hook context flush
<achilleasa> Should the batch call (processes storage reqs for a single unit) bail out with an error as early as possible given that no other change will be applied?
<achilleasa> rick_h: ^
<hml> stickupkid: still around?
<achilleasa> rick_h: looks like the facade version check was there because the same API call is used by the juju cli: https://github.com/juju/juju/blob/develop/cmd/juju/storage/add.go#L128
<stickupkid> hml, yarp
<stickupkid> speaking to builder
<hml> stickupkid: ho?
<rick_h> achilleasa:  :/ bah that explains it then
<rick_h> achilleasa:  not sure on the storage question. Maybe shoot an email to the list and I can see what wallyworld thinks there and what his experience is
<achilleasa> rick_h: My theory is that this is for the cli (add storage to multiple things)
<achilleasa> for flushing it should be atomic anyway
<achilleasa> (we will keep retrying)
<thumper> morning team
<hml> hi thumper
<rick_h> howdy thumper
<nammn_de> rick_h: I  finished the remove-space cmd and the edge cases. If you find some time can you give some feedback on ux and output? https://github.com/juju/juju/pull/11183
<rick_h> nammn_de:  trying, spending this afternoon trying to get my VPC setup correct.
<rick_h> Error details: subnet "subnet-bb18f687" not associated with VPC "vpc-c6391da1" main route table
<rick_h> :(
<nammn_de> rick_h: no worries,  if that doesn't work I printed the console output to make it look nice :D.
<rick_h> nammn_de:  yea, have your PR up and trying to bootstrap with it but :( on my vpc setup
<evhan> I've just switched my client from stable to edge, and I'm trying to run upgrade-controller to do the same for the agent, but there doesn't seem to be a corresponding version available(?). Is there a way to see available versions for the agent, and to synchronise the two?
<evhan> i.e. `juju upgrade-controller --dry-run --agent-stream=edge` says "no upgrades available".
<evhan> Actually, I might be confusing snap channels with juju... Streams?
<evhan> Yeah, never mind. RTFM, evhan.
<anastasiamac> evhan: this might be worth a discourse post! glad u figured it out so quickly :D
<evhan> Although, setting agent-stream=devel says the same.
<evhan> Ah, I need to specify both an agent-stream *and* an agent-version, since both have a value set in the model-config.
<rick_h> evhan:  yea exactly
<rick_h> the snap/edge is only for the local client and the controllers get their data from streams vs the snap world
<evhan> Yeah, makes sense.
<evhan> I'm still flailing here a bit though: https://paste.ubuntu.com/p/xZ3BX3DqTX/
<rick_h> evhan:  yea that last one is right. It's offering to upload the jujud binary from your client snap to the controller
<rick_h> evhan:  are you looking to try the edge (2.8?) or 2.7.2 (the current candidate stable?)
<evhan> rick_h: edge, having just updated my client version via snap to the same.
<rick_h> evhan:  ok, sec testing
<rick_h> evhan:  though tbh if you care about this controller it's going to get funny in the future as the updates come along
<hml> wallyworld:  are you around to talk actions and k8s charms?
<wallyworld> i am
<hml> wallyworld:  ho?
<evhan> OK, so when I changed the arguments to `juju upgrade-controller --agent-stream=released --agent-version=2.7.2` just to test upgrading full stop (note without --dry-run) it did what I wanted: https://paste.ubuntu.com/p/Scc7j5KfR8/
<wallyworld> join the team one, still talking to kelvin
<evhan> But not what I would have expected given the CLI flags. So I got the result I wanted but I'm still a little confused.
<evhan> rick_h: Yeah, not a controller I care about, just testing.
<rick_h> :/ yea that pastebin seems odd
<evhan> Worth filing an issue? I can try to reproduce it, or just provide what I have there. I don't really have a guess about what's going on, perhaps the --dry-run flag is interfering?
<rick_h> evhan:  I guess. Yes, the dry-run thinks it'll just use the local client as 2.7.2.1 but what it actually did for you is move you to 2.8 even though what you asked for was 2.7.2. :/
<rick_h> in the end it's a little bit like "don't do that, why would anyone go from a stable juju controller to a local pre-beta thingy" but here we are :)
<evhan> Yeah, fair. The 2.8 version was uploaded successfully when I look on the controller, although nothing was restarted so it's still running the original 2.7.1 version.
<anastasiamac> babbageclunk: m trying to run juju-restore and can't...
<babbageclunk> anastasiamac: what's happening?
<anastasiamac> babbageclunk: m on my primary and runnig the binary but providing just pwd is not enough
<anastasiamac> babbageclunk: m getting auth failed error
<anastasiamac> babbageclunk: it's me ofc.. i was using statepwd instead of the actual oldpwd... :)
<babbageclunk> anastasiamac: which password are you passing
<anastasiamac> all good
<babbageclunk> yeah, that one got me as well!
<anastasiamac> \o/
<babbageclunk> I'll add that to the readme - it's not at all clear which password would be the right one from looking at them
<anastasiamac> nice
#juju 2020-02-14
 * thumper is off to physio in 1.5 hours to see if he has broken his hand
<babbageclunk> :(
<anastasiamac> babbageclunk: my thoughts so far... https://github.com/juju/juju-restore/pull/3
<babbageclunk> ok, will look at that after kelvinliu's one
<babbageclunk> (after your other one)
<anastasiamac> babbageclunk: no worries \o/ m not in a rush...
<anastasiamac> babbageclunk: also cannot seem to link the juju-restore repo in trello so will just attach the link instead of using the github button
<wallyworld> kelvinliu: can we have a unit test for the PR? i guess you're waiting for a +1 from field?
<kelvinliu> wallyworld: yeah. and babbageclunk is reviewing
<babbageclunk> anastasiamac: maybe there's a trello user that needs access to the repo?
<anastasiamac> babbageclunk: dunno but attaching suited me for now :)
<babbageclunk> fine :)
<hpidcock> wallyworld: I've changed the remotestate/resolver in the uniter for passing through changes in state. https://github.com/juju/juju/pull/11214/ I haven't finished the unit tests, but just a preliminary change to see if you like it.
<wallyworld> ok, ta
<wallyworld> hpidcock: yeah, i think that's ok
<wallyworld> i left a couple of questions
<babbageclunk> anastasiamac: approved the backup format version PR
<anastasiamac> babbageclunk: \o/
<babbageclunk> kelvinliu: approved the vsphere change too
<kelvinliu> babbageclunk: ty
<anastasiamac> babbageclunk: "plugin" template name copied from elsewhere (I think maybe the juju command)..? want me to give it a diff name?
<anastasiamac> babbageclunk: addressed everything else
<babbageclunk> Just confused me - what about "template"? Or just ""? Not sure.
<anastasiamac> i like "template" :)
<babbageclunk> I guess it would be in the error if there's a syntactic problem with the template, but that's not a problem here since it's hardcoded.
<anastasiamac> yes :D
<babbageclunk> ooh, apparently the name becomes important if you want to have one template that invokes another
<anastasiamac> makes sense
<kelvinliu> wallyworld_: got this pr to add hostNetwork https://github.com/juju/juju/pull/11219  +1 plz
<wallyworld_> kelvinliu: just did it :-)
<wallyworld_> ty
<kelvinliu> quick! ty!
<wallyworld_> hpidcock: i'll try and land this on monday if you can review at some stage whenevr https://github.com/juju/juju/pull/11220
<hpidcock> wallyworld_: I'll check it out
<nammn_de> anyone with access to spaces want to give review/qa for https://github.com/juju/juju/pull/11183 only contains the patch for the command interface. rick_h wanted to give some feedback on ux, if his vpc is working
<achilleasa> nammn_de: is there any easy way to remove the duplicate "cannot remove space" from the error messages? They look a bit odd; also (minor nitpick) if the reasons are a list maybe we could prefix them with a '-' or something
<nammn_de> Will update the messages accordingly. Makes sense. Will try and update the qa section
<nammn_de> achilleasa: like this? https://paste.ubuntu.com/p/s8kFWfw9MS/
<nammn_de> hmm I think I can omit the because though
<nammn_de> yeah, I will omit the ... because:\n
<achilleasa> nammn_de: much nicer! Is it also possible to pretty print the constraint list?
<nammn_de> achilleasa: you mean  instead of this [{"model-e"} {"application-mediawiki"}] just this "model-e","application-mediawiki"
<achilleasa> nammn_de: or maybe like this: https://paste.ubuntu.com/p/j29kcdpmWZ/
<achilleasa> I don't remember if we usually quote these or not. Can you check what the other commands do?
<nammn_de> achilleasa: okay, great feedback. Will try to make it look more sexy
<achilleasa> can anyone recommend a lightweight charm that uses storage?
<achilleasa> nvm; swift-storage works for my tests
<manadart> stickupkid: What was the issue with old EC2 accounts/VPCs again?
<manadart> stickupkid: Oh, the flavours are not all available, right?
<stickupkid> manadart, if you don't have one setup (default VPC) - you don't have new instance types
<stickupkid> manadart, like everything is deprecated, but I believe you can force an instance type, but don't quote me on that
<nammn_de> achilleasa: now it looks pretty
<nammn_de> similiar to your showed output
<achilleasa> nammn_de: nice!
<rick_h> nammn_de:  and what is "model-e" there? We don't tend to show the raw tag like that. Should that just be "Found an existing model constraint"?
<nammn_de> rick_h: you are right, I should change that. Will change that for the other tags as well.  Like instead of application-mediawiki -> mediawiki
<hml> stickupkid: what do you think?  are we ready to move 11093 from WIP status?  After updating the form part?
<rick_h> nammn_de:  definitely, please look around as to how we would show stuff like that in other places. I think the error messages in the binding calls are done pretty well as good examples
<nammn_de> rick_h: Noted. Will do thanks!
<hml> stickupkid: should have read my email more closely :-)
<nammn_de> rick_h: does this look better? https://paste.ubuntu.com/p/MM4dTt2dkf/
<rick_h> nammn_de:  what does `juju get-constraints mediawiki` show in your example?
<nammn_de> rick_h: i accidentally killed my controller. Need to start it up and create env again
<rick_h> nammn_de:  ah sorry
<nammn_de> rick_h: if it helps we can talk quickly before the daily? Else Im gonna recreate and look later.
<nammn_de> mediawiki is shown as a constraint because one can deploy with: juju deploy mediawiki --constraints spaces=bla
<nammn_de> and a constraints entry in the collection is created with the id to that application
<nammn_de> rick_h: https://pastebin.canonical.com/p/PyVfwtY84f/
<manadart> stickupkid: https://github.com/juju/juju/pull/11221
<rick_h> nammn_de:  but what does "juju get-constraints" show?
<nammn_de> https://paste.ubuntu.com/p/bbnwzYS4ZD/
<nammn_de> rick_h ^
<rick_h> y
<rick_h> ty
<rick_h> nammn_de:  manadart stickupkid so I'm really thinking we should ignore those for the case of applications. We'll always end up with double output on the constraint and the binding.
<rick_h> what do you think?
<nammn_de> rick_h: afaict I could deploy an application with a constraint but without a binding -> which does not lead to a double output
<rick_h> nammn_de:  hmm, ok. Can you test/verify you can deploy with a space constraint but no bindings please?
<nammn_de> that would be my pastebin https://paste.ubuntu.com/p/CnBGZ2C5mX/
<nammn_de> rick_h: yeah above pastebin
<rick_h> hah ok you beat me to it
<rick_h> ok, that's cool then
<rick_h> nammn_de:  so one tweak to the wording on that please. As it reads now it's saying "following existing constraints" but the data afterwards isn't a constraint
<rick_h> nammn_de:  so I think it should read more direct like: "db" is used as a constraint on: x
<rick_h> nammn_de:  and in the model case it would be: "db" is used as a constraint on: mymodel, mediawiki
<rick_h> how does that read?
<stickupkid> $ juju remove-space db
<stickupkid> ERROR cannot remove space "db":
<stickupkid> - Found the following existing constraints: mediawiki
<stickupkid> This is really painful in integration tests
<stickupkid> :(
<achilleasa> stickupkid: nammn_de maybe we should add a --force?
<stickupkid> I wonder if there is a way to do it `--cascade`
<rick_h> achilleasa:  nammn_de we should definitely have a --force
<stickupkid> I think using force for this isn't quite right imo
<rick_h> stickupkid:  how so? My concern are things like stuck units, machines that didn't delete, etc
<rick_h> stickupkid:  basically the machines go unchanged you've just force moved the subnets from the one space name back to alpha
<stickupkid> if you remove-space db, what happens to mediawiki (in that scenario)
<stickupkid> fine, that makes sense
<rick_h> stickupkid:  so the subnet moves back to the alpha space
<stickupkid> --force is good
 * stickupkid is happy again
<rick_h> stickupkid:  and now mediawiki is still the same IP address-wise, but the endpoint binding is to alpha
<rick_h> oh ok, /me likes happy stickupkid
<stickupkid> don't want to have that situation around relations, that sucks
 * rick_h cannot process that sentence atm
 * rick_h sips more coffee...
<stickupkid> i.e. if you have a relation, you can't remove an application
<rick_h> oh a CMR one?
<stickupkid> or destroy a controller, hence why we brute force it in the integration tests
<stickupkid> yes
<rick_h> if you have a relation you can remove the app, but yes the CMR one is the complicated one that hangs up folks
<rick_h> gotcha, right
<rick_h> definitely don't want more of that
<stickupkid> 100% agree
<nammn_de> will add force. Do we want force to be able to run in any case (controller setting, application binding, constraints)?
<hml> stickupkid: https://bugs.launchpad.net/juju/+bug/1863253
<mup> Bug #1863253: model cache: publish modelUnitAdd too often with subordinates <juju:Triaged> <https://launchpad.net/bugs/1863253>
<rick_h> nammn_de:  yes
<stickupkid> hml, wicked
<hml> stickupkid: anything missing from the description: https://github.com/juju/juju/pull/11093?
<stickupkid> hml, nope, all good from me
<hml> stickupkid: i'll hit the merge button
<stickupkid> YAY
<stickupkid> hml, me right now https://i.imgur.com/5irRQLe.gif
<hml> :-D
<nammn_de> stickupkid: for better integration test compatibility, should I remove the "-"?
<stickupkid> nammn_de, from ?
<nammn_de> stickupkid: from the remove-space output you posted above from my example
<stickupkid> nammn_de, nah, that's fine imo
<hml> achilleasa:  available to chat on  "storing uniter state server-side" ?
<achilleasa> hml: sure. give me 2min
<nammn_de> rick_h: does this output look better? https://paste.ubuntu.com/p/B5yTVs76TV/
<stickupkid> hml, can you look at my PR for goose, https://github.com/go-goose/goose/pull/77
<stickupkid> hml, I think that's everything
<hml> stickupkid: sure
<rick_h> nammn_de:  looking good, what's the controller settings? Since spaces don't exist until the controller comes up so not sure what that is. I'd also assume it's `controller-config` vs "settings" since that's what the commands are around that
<nammn_de> rick_h:  ah, yes controller-config  it's: juju-ha-space and juju-mgmt-space in case you are running on a controller-model
<rick_h> nammn_de:  k, yea let's roll with calling it the same thing throughout (-config) but cool
<nammn_de> rick_h: rgr thanks
<nammn_de> will update with the force option, update qa and link again then
<rick_h> achilleasa:  do you have time to help me with my aws vpc setup please?
<rick_h> achilleasa:  or not if you're close to school pickup/etc
<achilleasa> rick_h: sure. I am in daily
<rick_h>  achilleasa omw
<nammn_de> rick_h: I am trying to fiddle around a good way to use force. Here are the options I see.
<nammn_de> Each option makes the implementation quite different.
<nammn_de> https://paste.ubuntu.com/p/TMGByz6BJk/
<nammn_de> For the first option i need to use some kind of dry-run / just run without force, and prompt again on error
<rick_h> nammn_de:  sec, otp will peek in a minute
<hml> stickupkid: did you get the go test -live working for your goose changs?
<stickupkid> hml, not via the python
<hml> stickupkid: no cli?
<stickupkid> hml, i just switched the default value
<stickupkid> hml, https://github.com/go-goose/goose/blob/v2/neutron/neutron_test.go#L12
<hml> stickupkid: how are these ports going to be used?  after being created?
<stickupkid> hml, ho?
<hml> stickupkid: sure
<rick_h> nammn_de:  ok, the first one is good but we have to remove the Y/n so that it's scriptable
<rick_h> if you go force that's it, you're on your own
<nammn_de> rick_h: the  Y/n is used in other code as well with the option to provide a -y .
<nammn_de> so I planned to do the same
<rick_h> nammn_de:  ah ok, then carry on
<nammn_de> rick_h: time for a ho?
<rick_h> nammn_de:  sure thing, meet in our 1-1 room?
<nammn_de> rick_h: coming
<achilleasa> rick_h: did the bootstrap work?
<rick_h> achilleasa:  no, it failed with a 20m timeout
<achilleasa> :-( did you retry with --debug?
<rick_h> achilleasa:  not yet, was otp
<rick_h> and have to run to family lunch. I'll mess with it achilleasa ty for the help!
 * rick_h goes *poof*
<achilleasa> good luck!
<achilleasa> stickupkid: can you help me track down something?
<stickupkid> achilleasa, maybe
<achilleasa> daily?
<stickupkid> of course
#juju 2020-02-16
<kupernetes> hello
<kupernetes> anyone familiar with this error?
<kupernetes> WARNING no API addressesERROR cmd: error out silently
<kupernetes> for juju clouds
<thumper> morning team
<babbageclunk> thumper: release question - is disco still supported?
<thumper> babbageclunk: probably not so much any more
<babbageclunk> looks like it ended in Jan. Ok, sounds like I need to update build_package.py
<babbageclunk> ah no, just hadn't fetched latest trunk
<babbageclunk> kicked off the release job
<thumper> timClicks: got a few minutes?
<timClicks> thumper: yes, if you're still free
