#juju 2012-01-30
<niemeyer> Hello!
<niemeyer> How's everybody on that fine Monday morning?
<hspencer> mornin niemeyer
<hspencer> barely wakin up
<hspencer> lol
<niemeyer> hspencer: Hehe, that's how Mondays are
<niemeyer> :)
<hspencer> lol
<hspencer> yea
 * nijaba welcomes dxd828
<koolhead11> hi all
<nijaba> Hello koolhead11
<koolhead11> hey nijaba . how are you?
<nijaba> koolhead11: doing good, thanks.  you?
<koolhead11> am good too. time to update owncloud charm from 2 to version 3
<koolhead11> reading and testing it
<nijaba> nice
<heylukecarrier> Hi, I'm trying to set up Juju with the local environment type and am getting errors from libvirt during the bootstrap phase (virsh net-start default needs to be run as root, but Juju's bootstrap command doesn't do so). Should I run juju bootstrap as root? I'm guessing this would cause permissions issues?
<fwereade_> heylukecarrier, that's funny, I thought it would ask you for your password at the point it actually needed root access, would you pastebin a transcript please?
<heylukecarrier> fwereade_, here's a paste of the bootstrap run: http://pastebin.com/NvXZvTyZ
<heylukecarrier> when I ran it as root (with sudo), it complained about missing keys, but worked perfectly after destroying the environment, generating the key and running the command again
<fwereade_> heylukecarrier, this suggests that you may need to reboot, after installing the dependencies, before it will work
<fwereade_> http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
<fwereade_> heylukecarrier, have you tried that?
<heylukecarrier> fwereade_, I've just followed those instructions and have the same problem at the same point. When I ran juju bootstrap as root, I could carry on
<koolhead11> hazmat: around?
<koolhead11> heylukecarrier: pastebin your environments.yaml
<koolhead11> are you creating the yaml as the root user by any chance :P
<heylukecarrier> fwereade_, http://pastebin.com/hpjw0FtQ
<heylukecarrier> koolhead11, I'm not entirely insane :3
<heylukecarrier> since I've only just installed virsh, would a reboot be in order?
<koolhead11> heylukecarrier: :D
<fwereade_> heylukecarrier, sorry, that was what I suggested, I think I was unclear
<fwereade_> heylukecarrier, apparently you sometimes need it after installing the dependencies
<heylukecarrier> Okay, rebooted, trying again now
<heylukecarrier> Wow, so simple, it works fine. I think this should be emphasised in the documentation somewhere, though :)
<fwereade_> heylukecarrier, good point, thanks :)
<heylukecarrier> Another stupid question; is there any way to increase the verbosity of the debug log?
<_mup_> Bug #923746 was filed: local provider docs should mention needing to reboot <juju:New> < https://launchpad.net/bugs/923746 >
<hazmat> g'morning
<fwereade_> heylukecarrier, --level DEBUG may help
<hazmat> heylukecarrier, your host is oneiric and you're using the oneiric juju package?
<fwereade_> heylukecarrier, no wait, that's the default already
<fwereade_> heylukecarrier, I think that's as verbose as it gets -- what do you need more visibility into?
<hazmat> heylukecarrier, juju will ask for sudo as needed
<hazmat> heylukecarrier, what's the locale on your system?
<heylukecarrier> hazmat, correct. There are long periods of time where there is absolutely no output, and even using the --rewind switch doesn't reveal anything. I'm just interested in knowing what it's doing
<fwereade_> hazmat, morning :)
<heylukecarrier> hazmat, it's en_GB.UTF-8. There is some output, just very little for debug mode
<hazmat> heylukecarrier, after bootstrap, during the first deploy it's creating an lxc container via lxc's lxc-ubuntu template, which runs debootstrap on a directory to set up a minimal environment. juju then uses lxc-clone to copy that container over for other containers. the initial debootstrap can take a few minutes
<hazmat> during bootstrap on local, juju starts a zookeeper instance (running as regular user), starts a machine agent (running as root; this creates the lxc containers), ensures that the default libvirt network is up and running, and sets up a web server (on the libvirt network) for distributing files to the containers
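The container lifecycle hazmat describes corresponds roughly to these lxc commands. The container names below are illustrative, and the real commands need root plus an lxc install, so this sketch only prints them; juju drives the equivalent steps from its own code.

```shell
# Roughly the lxc steps juju's local provider performs, per the
# explanation above. Names are illustrative; commands are echoed
# rather than run because they require root and an lxc install.
TEMPLATE=myenv-template        # shared base container, built once
UNIT=myenv-mysql-0             # per-unit clone of the template
echo "lxc-create -n ${TEMPLATE} -t ubuntu    # runs debootstrap; slow the first time"
echo "lxc-clone  -o ${TEMPLATE} -n ${UNIT}   # cheap copy for each new unit"
echo "lxc-start  -n ${UNIT} -d               # boot the unit container"
```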
<heylukecarrier> hazmat, the bootstrap was failing initially because virsh can't configure the network stack without a reboot after the initial install
<hazmat> fwereade_, g'morning :-)
<heylukecarrier> once I rebooted, lxc finished successfully and the install worked. Thanks for the explanation though; I guess lxc was running
<jcastro> SpamapS: I saw a bunch of charm- bug activity after I EODed on Friday, anything new get promulgated?
<koolhead11> hazmat: in case you will have time today, i can get the doc updated :)
<koolhead11> hola jcastro
<hazmat> heylukecarrier, you have to login/logout for the libvirt install to fully take effect, it wants the user to be a member of the libvirt group
<fwereade_> hazmat, oh, that's the reason? good to know
<fwereade_> hazmat, could we just `newgrp libvirtd` for the same effect?
<koolhead11> heylukecarrier: After installing it, you may have to reboot (I had to, else libvirt couldn't create the network bridge; it indicated the bridge was already in use). i think that askubuntu page had the info
<hazmat> fwereade_, that looks promising
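The login/logout requirement comes down to group membership, so checking the active session before bootstrapping can save a reboot. A minimal sketch, assuming the group is named `libvirtd` as on Ubuntu:

```shell
# Check whether a group is active in the current session; without it,
# juju's local provider can't talk to libvirt.
has_group() {
    # $1 = group name; $2 = optional space-separated group list
    # (defaults to the current session's groups from `id -nG`)
    list="${2:-$(id -nG)}"
    case " $list " in
        *" $1 "*) return 0 ;;
        *)        return 1 ;;
    esac
}

if has_group libvirtd; then
    echo "libvirtd active: juju bootstrap should be able to reach libvirt"
else
    echo "libvirtd not active: log out/in, reboot, or try 'newgrp libvirtd'"
fi
```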
<fwereade_> koolhead11, it's really very non-obvious there ;)
 * koolhead11 is confused.
<koolhead11> monday blues
<heylukecarrier> fwereade_, koolhead11: thanks both; very helpful!
<fwereade_> heylukecarrier, a pleasure :)
<koolhead11> heylukecarrier: welcome!! :)
<gary_poster> Hi.  Can two juju instances in the same cluster talk to one another over a given port without exposing the port?  I would have expected so, from past-but-maybe-forgotten-non-juju ec2 experience, but at least with a lxc-based juju cluster it seems like we might have to expose the port.  Can anyone verify expectations in this regard?
 * gary_poster will go look for docs about that; I think others have, but I haven't tried yet.
<gary_poster> OK, seems like the port is actually connecting fine.  go telnet.  some other problem
<SpamapS> gary_poster: expose only controls access in and out of the cloud provider.
<SpamapS> gary_poster: also LXC has no ability to control the firewall as of yet, so expose/unexpose are noops on the local provider.
<gary_poster> SpamapS, expose: thanks, right, that's what I figured/hoped.  We should expect no firewalls within the provider itself, was my basic assertion/question-in-the-form-of-a-statement.
<gary_poster> lxc expose noop: heh, ok
<SpamapS> jcastro: re bug activity on Friday.. no promulgation yet.. but very close. :)
<jcastro> SpamapS: good, just wanted to make sure I didn't miss a new person to ship swag to!
<_mup_> Bug #923821 was filed: juju installs lxc/libvirt on all machines <juju:New> < https://launchpad.net/bugs/923821 >
<grapz> Hi. Is it possible to override the default instance type when launching a new unit?
<m_3> grapz: yes, but it's an all-or-nothing setting for an environment... add an entry to the environment in ~/.juju/environments.yaml that looks like 'default-instance-type: m1.large'
<m_3> grapz: there's also 'default-image-id: ami-5b94ca1e'
<m_3> grapz: there's a feature request for per-unit overrides
<grapz> m_3, ahh, just what I was about to reply with, that there should be a per-unit override :)
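m_3's suggestion as an environments.yaml fragment. The environment name, bucket, and secret are placeholders; the ami id is the illustrative one from above:

```yaml
environments:
  sample:
    type: ec2
    control-bucket: juju-sample-bucket   # placeholder
    admin-secret: some-secret            # placeholder
    default-instance-type: m1.large      # applies to every machine juju starts
    default-image-id: ami-5b94ca1e       # optional; pins a specific image
```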
<smoser> fwereade_, could you please open a bug against libvirt-bin or lxc for bug 923746
<_mup_> Bug #923746: local provider docs should mention needing to reboot <juju:Fix Released by fwereade> < https://launchpad.net/bugs/923746 >
<m_3> really it's going to look like constraints per service unit iirc
<smoser>   can fail with "error: internal...." is not really acceptable
<m_3> "try to provide >=2cpus, 1G <= RAM <= 15G, etc."
<grapz> m_3, that sounds great
<arosales> jcastro: the Juju events page is nice, thanks for setting that up.
<jcastro> arosales: heh, now to find someone in europe!
<fwereade_> smoser, sure, I think it's actually just a matter of `newgrp libvirtd` once it's installed
<SpamapS> fwereade_: I just realized you sent me a branch to test out last week, and I haven't even looked at it yet. Is that still relevant?
<fwereade_> SpamapS, it should be: I think hazmat has been poking at the unit-relation-status-not-showing business (hm I should have a word about that)
<fwereade_> SpamapS, but yes: giving it a kick around would be very much appreciated
<koolhead17> hi all
<hazmat> fwereade_, i never reproduced that
<hazmat> fwereade_, on local provider, killing unit agents at will, i did trigger some issues killing off a machine agent
<hazmat> but unrelated to status problems with units
<fwereade_> hazmat, hmm, funny -- sorry, I noticed you saying that just as I was falling asleep and forgot about it
<fwereade_> hazmat, what went wrong with the machine agent?
<hazmat> fwereade_, an error around a node already existing which the code expected to create; i didn't dig much further into it, it's easy to reproduce though
<fwereade_> hazmat, so once you bounced a unit agent with relations up subsequent statuses would still have (say) relations: db: up, rather than relations: {} ?
<hazmat> fwereade_, yup
<fwereade_> hazmat, interesting, I'll try to take a quick look at the machine agent tonight (contrary to appearances, I'm not really on atm)
<fwereade_> hazmat, but, hmm, I'm confused by the status stuff
<hazmat> fwereade_, was there a particular sequence to reproduce, and i assume you have juju-origin pointing at your branch?
<fwereade_> hazmat, yeah, I had juju-origin pointing at my branch and I never needed to do anything more than kill a unit agent
<fwereade_> hazmat, I'll try again and note exactly what I'm doing
<fwereade_> hazmat, btw, I'm getting a tiny bit fretful about fix-charm-upgrade... is there something horribly wrong with it? ;)
<hazmat> fwereade_, i'm a bit concerned about the complexity of what it's introducing
<hazmat> fwereade_, do you have a moment to chat about it?
<hazmat> fwereade_, if we assume for the moment a WAL transition log, what about the previous cut of this branch isn't re-entrant? what value are the additional states and transitions contributing?
<hazmat> right now it's a separation of upgrade into three distinct steps, while managing cross-step coordination of the executor. if we take download, extract, and execute as a single transition, with an accompanying error transition that effectively retries the operation, what do we lack that needs the additional states and transitions? the WAL transition log would encapsulate the marker that the started-state workflow variables currently use to mark the transition..
<SpamapS> WAL seems like overkill, shouldn't there just be two things.. intended_charm_version and agent_reported_charm_version .. the first is updated by admins, the second by agents after they complete an upgrade.
<hazmat> SpamapS, if something dies mid transition, we have to record some state so that on recovery we proceed with the in-flight operation
<SpamapS> "recovery" ?
<SpamapS> just do it all again
<SpamapS> call stop, extract new charm, call upgrade-charm, call start
<hazmat> SpamapS, exactly, but we need to know what we should do again in some cases, though you're right, it should be apparent
<hazmat> if we clear charm-upgrade post success
<SpamapS> trying to pinpoint where in flight to start seems like folly
<SpamapS> oddly enough, dpkg handles this quite nicely.. and we should just copy it
<SpamapS> but a single upgrade-charm hook won't suffice if we copy dpkg
<hazmat> the wal is generally helpful across any transition (the action boundary is transient)
<SpamapS> has to be prerm, postrm , preinst, and postinst :P
<hazmat> SpamapS, it's even simpler than that, we just need to record transitions on disk, else it's transient memory
<SpamapS> I really dislike using disk for this
<SpamapS> zookeeper is the system of truth
<hazmat> fair enough
<SpamapS> The thing I've always liked about juju is that the agents just try and make zookeeper's intentions true, or report an error
<SpamapS> anyway I'm not deep enough in this problem to have a strong opinion
<SpamapS> just asking pointed questions.. I'm sure you guys have a handle on it. :)
 * SpamapS goes outside to experience a rare day of fresh air in LA for lunchtime. :)
<hazmat> it's more that zookeeper currently encodes user and environment intentions but doesn't capture all of the unit agent's state
<hazmat> SpamapS, good questions, enjoy ;-)
<hazmat> SpamapS, i hear they got oxygen bars out there, air o'clock ;-)
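SpamapS's "just do it all again" position can be sketched as an idempotent upgrade routine. The hook sequence is the one he lists (stop, extract, upgrade-charm, start); the marker file and step log below are illustrative details, not juju's actual mechanism:

```shell
# Restartable upgrade sketch: if any step dies, rerunning the whole
# function repeats the sequence from the top rather than trying to
# resume mid-flight.
run_step() {
    # record each completed step; a real implementation would run
    # the corresponding hook here instead of just logging its name
    echo "$2" >> "$1/steps.log"
}

upgrade_charm() {
    workdir="$1"
    touch "$workdir/upgrade-in-progress"   # illustrative recovery marker
    run_step "$workdir" stop
    run_step "$workdir" extract
    run_step "$workdir" upgrade-charm
    run_step "$workdir" start
    rm -f "$workdir/upgrade-in-progress"   # cleared only on full success
}

tmp=$(mktemp -d)
upgrade_charm "$tmp"
cat "$tmp/steps.log"
```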
<grapz> how is it with system requirements for the zookeeper node? should it be ok to run it on a t1.micro if you only have a couple of machines?
<lifeless> sure
<lifeless> zookeeper doesn't need all that much
<grapz> great
<SpamapS> We should probably try and do some tests to see at what point ZK exhausts all the memory on each of the instance types.. I would bet that it's 1000s of units/relations/etc.
 * SpamapS rings the bell, BONG! BONG! BONG BONG BONG BONG! promulgation!
<SpamapS> oops-tools has been promulgated. :)
<SpamapS> hazmat: how long does the charm browser take to notice that?
<lifeless> SpamapS: with an amqp relation ?
<SpamapS> lifeless: not yet, just pgsql
<lifeless> how do you inject an external pgsql server
<SpamapS> lifeless: you don't. subordinate charms will be how thats done..
 * SpamapS says that about everything
<lifeless> SpamapS: uhm, really?
<lifeless> SpamapS: there's no way to say 'I have a machine twice as large as any instance and it runs pgsql for my cloud'
<SpamapS> lifeless: yeah, you'd have a little charm that just contains the configs for your external pgsql server and feeds them in
<SpamapS> lifeless: right now to do that you'd eat up a machine just to run the hooks to say that
<SpamapS> lifeless: subordinates are the main "must have" for 12.04, and bcsaller is hard at work implementing it.. I think. :)
<bcsaller> yes :)
 * SpamapS ^5's bcsaller
<SpamapS> bcsaller: stop reading IRC and get back to work
<bcsaller> ha
 * SpamapS lashes bcsaller with the whip
<SpamapS> ;-)
 * SpamapS lashes and ^5's with love, of course
<hazmat> SpamapS, 5m
 * m_3 wants to bash bash
<m_3> grrrrr
<hazmat> lifeless, the resource constraint stuff is aimed to allow you to target service deployment to appropriate resources
<SpamapS> m_3: 'I've had bigger chunks 'o korn in mah csh' -- Fat Bastard to Dash
<hazmat> lifeless, if you're curious this is the spec.. https://code.launchpad.net/~fwereade/juju/placement-spec/+merge/84443
<gary_poster> hi.  If you have relatively lightweight data that you care about persistently (like a database or perhaps just history/log files) what is the intended juju story?  Are we supposed to use a hook to go stash this information somewhere, or is this considered to be "not juju's problem"? (we should configure backups separately if we have any need like that)
<SpamapS> gary_poster: right now juju does nothing except it will require you to explicitly terminate a machine so you don't lose your data.
<gary_poster> SpamapS, so the intent is that you write a terminate hook to stash what you want?
<SpamapS> gary_poster: no, the intent is you back up your data somehow.. we haven't defined how that happens.
<gary_poster> SpamapS, got it, thanks
<SpamapS> gary_poster: subordinate charms will allow some interesting innovation there.. you could make 'bacula-agent' a subordinate charm to a service, and then relate it to the 'bacula-director' service which would be able to do regular backups.
<gary_poster> hm, yeah.  Being able to know "we're about to go down so you better do a last minute stash" might still be valuable in that scenario
<lifeless> hazmat: well the thing is inside canonical DC we have existing things that are really not cloud ready
<lifeless> hazmat: 500GB databases, not suitable for S3 or even EBS style deploys
<lifeless> hazmat: so gluing those into juju as external endpoints would allow us to start using juju early, while the rest of the bits come up to what we need
<_mup_> juju/refactor-machine-agent r449 committed by jim.baker@canonical.com
<_mup_> Initial commit
<hazmat> lifeless, fair enough, at the cost of an additional instance, you could wire up the external service with a charm... else you're just talking about hard-wiring the external service into, say, the app charm.
<hazmat> we don't have a better story for modeling external system services integration atm
<hazmat> m_3, is that jenkins charm test using ec2 nodes with local?
<hazmat> or are they serially executing on the jenkins server?
<m_3> hazmat: ec2 w/local
<SpamapS> lifeless: to add to what hazmat is saying.. for stuff that is not cloud-ready .. you just write a subordinate charm like 'prod-database-client' that does what the regular postgresql charm would do, except on a remote system instead of locally.
<SpamapS> lifeless: when subordinate charms are "deployed" they don't actually go anywhere until they are related to a non-subordinate charm, then the two get deployed together.
<lifeless> SpamapS: hazmat: thanks
<grapz> When I create new EC2 environments now, I just keep getting 'Invalid SSH key' when doing any operations against it. It worked perfectly earlier today. Can't really find what I might have screwed up.
<SpamapS> grapz: can you pastebin the exact error?
<grapz> SpamapS, http://pastebin.com/FzMRd9GR
<SpamapS> grapz: btw, if you use 'pastebinit' it will use paste.ubuntu.com and we won't have to be inundated with ads. ;)
<grapz> oh, didn't know that, thanks for the tip :)
<SpamapS> grapz: can you ssh to that machine directly with just ssh -l ubuntu ec2-79-125-72-14.eu-west-1.compute.amazonaws.com  ?
<grapz> SpamapS, no
<grapz> Permission denied (publickey)
<SpamapS> grapz: ok, your key didn't get onto your new instances then. Perhaps they have errors on their console?
<SpamapS> grapz: euca-get-console-output would show that
<hazmat> grapz, are you running on the same machine/account that was bootstrapped from.. if not then the ssh private key used to connect to the environment may not be available to ssh.. juju will by default use one found in ~/.ssh when executing the bootstrap command
<grapz> hazmat, yes, I'm on the same user/machine, so that shouldn't be the issue
<grapz> SpamapS, I see some stuff about saving keys to /etc/ssh and then it lists the SSH fingerprints
<SpamapS> grapz: thats the host keys, different keys
<grapz> ok
<grapz> i'll continue looking at this tomorrow - need to get some sleep now
<SpamapS> grapz: cheers!
#juju 2012-01-31
<koolhead17> hi all
<koolhead17> SpamapS: around
<SpamapS> koolhead17: actually yes
<koolhead17> SpamapS: my talk on juju got selected :)
<koolhead17> will share slides in a few days :)
<SpamapS> koolhead17: congrats!
<niemeyer> Good morning all
<jorge> hazmat: hi man! all fine?
<jorge> hazmat: recently i've talked to niemeyer about a problem when running "juju status". i always get a "bind error". i've tested a lot of things with sniffers and verified logs along the command call. i've detected that juju tries to ssh to an instance and suddenly the ssh connection is closed! so, googling, i found an irc log about the same problem. in that case there was a discussion about ipv6.
<jorge> hazmat: so i ran with "strace -e trace=listen,bind,connect -f juju status" and saw that after trying to bind to an ipv6 address, the process dies (http://pastebin.com/JmCnbZbY). In auth.log, at the instance, i can see a sudden "ssh connection closed". So, it appears that these two things are correlated.
<jorge> hazmat: well, i had a sysctl.conf entry disabling ipv6 (net.ipv6.conf.all.disable_ipv6=1). So, removing that, all is working fine!!! OMG! I spent some days trying to understand this problem. Now, just an opinion: i think that somewhere in the code, when some 'if' tests analyse the return of the calls, an error from the ipv6 attempt is treated as if everything failed.
<jorge> hazmat: So, even though the ipv4 connection is OK, the process dies! Just a bet hehe.
<bac> hi, i'm seeing the problem reported by gary_poster and me last week again.  in the install hook 'config-get' is not returning anything despite having defaults set in config.yaml.  the same charm works for others.  i'm using the PPA version 0.5+bzr451-1juju2~precise1.  any ideas on how to debug it?  this problem is blocking me.
<gary_poster> (bac, you verified that environments.yaml says "juju-origin: ppa" also right?
<gary_poster> )
<bac> gary_poster: yes
<gary_poster> cool
<bac> gary_poster: were you able to resolve the problem?  any debugging hints?
<gary_poster> I have debugging hints on our yellow page
<gary_poster> bac, looking in the data directory and looking at the ppa environment variables were the things that Kapil had me do
<hazmat> jorge, interesting, and odd, juju defers to the ssh binary for setting up the tunnel, and we always hand it an ipv4 address, something sounds odd there.. i guess it's the ssh failing when trying to establish the listening/forwarding port
<hazmat>  hmm.. we could specify the host ip address for the forward port to be more explicit as 127.0.0.1, it sounds like just the port spec alone is causing a bind to additional interfaces
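hazmat's proposed workaround would make the forward spec explicit about its bind address. A sketch of what that looks like on the ssh command line (the ports and instance address are illustrative; juju builds this internally rather than via a script):

```shell
# An ssh port-forward spec that binds only to 127.0.0.1, so ssh never
# attempts an IPv6 listener even when IPv6 is disabled via sysctl.
LOCAL_PORT=57000                 # illustrative local end of the tunnel
ZK_PORT=2181                     # ZooKeeper's default client port
spec="127.0.0.1:${LOCAL_PORT}:localhost:${ZK_PORT}"
echo "ssh -N -L ${spec} ubuntu@<instance-address>"
```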
<hazmat> bac which charm?
<bac> hazmat: the buildbot charm we're working on
<hazmat> bac keep in mind the environment will cache the charm, if the version hasn't been incremented.. so subsequent deploys without incrementing the version or destroying the environment will use the cached version
<bac> hazmat: gary_poster pointed me to the master-customize log file.  in that i see the add-apt-repository failed due to name lookup, so it didn't install juju from the PPA after all
<bac> hazmat: thanks but versioning is not the problem
<hazmat> bac what does juju status report the state of the unit as ?
<bac> hazmat: it is install_error b/c 'config-get' is not returning anything, so the install charm fails
<hazmat> if one exists at all, then it should have installed juju into the container, there aren't any fallbacks to juju code installation
<bac> hazmat: but i think that is related to the version mismatch caused by it not being able to add the PPA repo
<bac> hazmat: sorry, i don't understand your last comment.
<bac> hazmat: juju was installed in the container, but it was not the PPA version it was from universe
<bac> so i wonder if the version mismatch is what is causing the problem with config-get.
<hazmat> bac, that would make sense i think, and would explain the mismatch, can you ssh or chroot into the container and verify the package version?
<bac> hazmat: i could and saw that it was not the ppa version (r338 or similar)
<bac> hazmat: i manually added the PPA repo and was able to install the proper one.
<bac> but after doing that, 'juju ssh' began prompting me for the ubuntu user password.  any idea what that is?
<hazmat> bac it should be setup with ssh keys
<bac> hazmat: here is the master-customize.log that shows the errors i mention:  http://pastebin.ubuntu.com/823887/
<bac> at line 255 is the apt-add-repository failure
<bac> and line 661 looks like an error too in that the key isn't being added to authorized_keys.  that would explain why i cannot ssh.
<hazmat> bac, indeed that looks like both the problems
<hazmat> bac to reset it you have to destroy-environment and recreate it
<bac> hazmat: have done so.  this log file is from a clean deploy into a new environment
<hazmat> ugh.. it looks like it's not able to set up the resolv.conf correctly at the top of the log
<bac> yeah
<hazmat> i noticed this problem in the precise containers, i've got a branch in the review queue which installs the resolvconf package explicitly
<hazmat> bac, if you want to try it out as an alternative, you can set juju-origin: lp:~hazmat/juju/local-respect-series    and do a destroy-environment/bootstrap
<gary_poster> there was a resolv.conf change in precise recently.  resolvconf was not required before, and lxc depended on this. When resolvconf was added, it overwrote the custom /etc/resolv.conf that lxc installed (with nameserver 10.0.3.1). It bit me in another way.  I'm also having lxc issues (lxc-start is hanging, at least on lucid); trying to gather data for Serge before I bother him.
<gary_poster> Lordy, my pronoun use was horrible :-/
<gary_poster> I blame it on baby-caused lack of sleep :-P
<hazmat> gary_poster, consistency makes it work :-)
<gary_poster> lol
<jcastro> m_3: FYI remember you have a charm session tomorrow for developer week: https://wiki.ubuntu.com/UbuntuDeveloperWeek
<jcastro> koolhead17: heya
<koolhead11> hola jcastro :)
<jcastro> https://code.launchpad.net/~jorge/charms/oneiric/owncloud/upgrade-to-3
<jcastro> owncloud bumped to 3.0 and there's my branch to upgrade the charm
<jcastro> however I can't test it today, I'm having juju problems of some kind
<jcastro> so I haven't proposed it yet but if you're looking for an easy upgrade/fix it just needs to be tested that it works.
<koolhead11> jcastro: cool!! lemme check it. i will need some time.
<jcastro> no rush, I just wanted to make you aware that it's there
<koolhead11> jcastro: super :)
<m_3> jcastro: yup... _and_ a talk tomorrow afternoon at mongo-boulder
<jcastro> man, look at you, unstoppable
<koolhead11> m_3: hi there
<m_3> hey koolhead11!
<koolhead11> jcastro: you were working on that ubuntupad about juju-charm school content
<koolhead11> how are we planning to use it
<m_3> wait, weren't you 17?  where'd 11 come from?
<jcastro> he must have lost 6 kool points
<koolhead11> m_3: am at work so am 11 :D
<jcastro> hey SpamapS
<jcastro> is there a flag for charm get to fetch unofficial, unfinished charms?
<jcastro> like say, django?
<SpamapS> jcastro: no, bzr branch those
<jcastro> k
<SpamapS> jcastro: if we want to support that in charm get, we'll have to support showing all 9 branches that matched the given word...
<jcastro> I have some charm workflow questions, can we chat later?
<SpamapS> jcastro: I'm not averse to accepting random bzr branch urls in charm get.. just for consistency.
<SpamapS> jcastro: certainly!
<jcastro> SpamapS: or m_3: You guys know offhand the status of the django charm?
<m_3> jcastro: nope, haven't checked on it in a while
<dpm> jcastro, is the django charm we're talking about the same one noodles created on http://micknelson.wordpress.com/2011/11/22/a-generic-juju-charm-for-django-apps/ ? if so, I can ask him directly
<jcastro> yep
<dpm> jcastro, SpamapS, so I was trying to set up a django server on a Canonistack instance, and I couldn't get past the 'bootstrap' step. Would you have a few minutes to give me a hand? Here's what I'm getting:
<dpm> http://pastebin.ubuntu.com/824142/
<jcastro> adam_g: might need your help on this one too ^
<dpm> the access key is from:
<dpm> https://pastebin.canonical.com/59146/
<adam_g> dpm: you need to specify:
<adam_g>     ec2-uri:  http://91.189.93.65:8773/services/Cloud
<adam_g>     s3-uri: http://91.189.93.65:3333
<adam_g>     ec2-key-name: keypairname
<dpm> adam_g, where can I find the 'keypairname' I should specify there?
 * dpm is new to cloud
<SpamapS> you don't actually need ec2-key-name
<dpm> so I should only specify the first two lines above, then?
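adam_g's settings as a complete environments.yaml entry (bucket and secret are placeholders; per SpamapS, ec2-key-name can be left out):

```yaml
environments:
  canonistack:
    type: ec2
    ec2-uri: http://91.189.93.65:8773/services/Cloud
    s3-uri: http://91.189.93.65:3333
    control-bucket: some-unique-bucket   # placeholder
    admin-secret: some-secret            # placeholder
    # ec2-key-name: keypairname          # optional; juju can use ~/.ssh keys
```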
 * m_3 found the perfect sick-food... meatballs-n-spaghettios
<SpamapS> m_3: yes that is the perfect food to make you sick
<etneg_> well nice logo
<etneg_> :D
<etneg_> just noticed ensemble went through a change
<jcastro> SpamapS: I can chat any time you want.
<jcastro> etneg_: yeah we're growing up!
<etneg_> come a long way
<jcastro> SpamapS: I am just brainstorming so we don't need to bother m_3.
<SpamapS> "What a long strange trip its been"
<etneg_> i wasnt around the time juju was announced else i'd have taken part in creating the logo
<jcastro> that's my way of saying I have really stupid questions and want to limit the people who hear them. :)
<jcastro> etneg_: plenty to do, want to write a charm?
<SpamapS> jcastro: just realized I completely forgot breakfast.. need to eat and then I'll put on my stupid question answering hat and buzz you on G+
<etneg_> more logo stuff?
<etneg_> vectors? sure
<etneg_> code? i'll pass
<jcastro> heh
<jcastro> SpamapS: nod.
<jcastro> SpamapS: I am available for the rest of the day, no calls, no rush, at your convenience.
<etneg_> if you guys plan to modify the existing logo, let meknow
<etneg_> :D
<etneg_> or need icons, let me know
<SpamapS> Does anybody else get a warm fuzzy when a bug they reported and worked around in 2 places gets fixed without a single question from the developer fixing it?
 * SpamapS is a bug reporting STUD
<etneg_> at the time i worked on ensemble's logo i wasnt exactly vector-compliant, was mostly paper stuff
<SpamapS> https://review.openstack.org/#patch,sidebyside,3575,1,nova/tests/api/ec2/test_cloud.py
<etneg_> either way nice logo
<SpamapS> etneg_: there was a blog post showing the story of the juju logo which included your early drawings I think
<etneg_> ye and mine sucked now that i look at it
<etneg_> too many elements and what not, i think the team was kind to mention it
<etneg_> lol
<EvilBill> hmmmm, re: mac version of juju client - I just found what appears to be an easy-to-use packager: http://s.sudre.free.fr/Software/Iceberg.html
<SpamapS> etneg_: http://design.canonical.com/2011/11/juju-logo/
<etneg_> ye just read that awhile ago
<SpamapS> ok.. eating.. now
<etneg_> i was flipping through ubuntu's design stuff and bumped into that
<etneg_> so dropped by here
<etneg_> the juju logo looks pretty sweet
<etneg_> gl:D
<hazmat> for giggles, a graphviz output of the entire charm universe with dependencies lined out.. http://kapilt.com/files/charm-graph.png
<m_3> hazmat: nice
<jimbaker> graphviz, at least hierarchical drawing, is definitely not a good way to render it... but still useful to see the centrality of some charms like mysql
<m_3> maybe even try twopi?
<jimbaker> twopi probably would do better
<jimbaker> esp since it's not truly hierarchical
<jimbaker> maybe some charms are like that, but not true in general
<hazmat> jimbaker, i'm going to play around with networkx with matplot lib as an alternative
<jimbaker> hazmat, sounds like a plan. there's definitely some interesting graph relations there
<SpamapS> jcastro: G+ yo
<gary_poster> SpamapS, hi.  is your charm tests spec hardened enough for us to try and follow it, and have a reasonably high probability of us not having to rework the whole thing later?
<gary_poster> (From https://code.launchpad.net/~clint-fewbar/charm-tools/charm-tests-spec branch)
<SpamapS> gary_poster: I think so.. I can give you my juju wrapper script that I use to run the example tests if you like?
<gary_poster> SpamapS, cool, yeah thanks
<SpamapS> http://paste.ubuntu.com/824374/
<SpamapS> gary_poster: it doesn't do the deploy-previous yet :)
<SpamapS> gary_poster: and you have to set RESOLVE_TEST_CHARMS=1 to make it do anything at all
<gary_poster> :-) cool, it's a start, thanks SpamapS .  The main idea is for us to have a target for writing tests, and this gives us one.
<_mup_> juju/refactor-machine-agent r450 committed by jim.baker@canonical.com
<_mup_> UnitManager tests
<_mup_> juju/refactor-machine-agent r451 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<SpamapS> gary_poster: indeed, I've written a few more in the course of writing the spec.. definitely need to put get-unit-info into juju itself
<gary_poster> cool
<SpamapS> If anybody is interested in learning a few in's and outs of packaging.. I'll be giving a brief presentation over in #ubuntu-classroom in about 9 minutes
<grapz> SpamapS, I think I've found out about the SSH issue I had yesterday
<grapz> just because I'm being cheap, I was using t1.micro to do testing on, and it seems like it sometimes takes forever to complete the init steps
<grapz> it's been going on for 9 minutes now, and I don't see the 'cloud-init boot finished' on my t1.micro yet (but now I can SSH to it), and the m1.small that I started at the same time, finished ages ago
<grapz> there should maybe be a disclaimer about using t1.micro in the docs somewhere
<SpamapS> grapz: sometimes they work
<SpamapS> but usually no
<SpamapS> I mean, I run my personal website on one... but it gets like, 150 visits a day ;)
<grapz> :)
<grapz> I'll add a line to the 'default-instance-type' in the docs about it being really slow at times - might save someone else from stumbling upon it
<SpamapS> grapz: indeed, perhaps add it as a footnote so it doesn't interrupt the flow.
 * SpamapS goes off to run a few errands
<grapz> good idea
<grapz> So, when I've made a change to the docs, added and pushed the change to my personal branch, I just file a bug against juju with the branch attached, and then it will be reviewed ? (my first time submitting a patch in LP)
<_mup_> juju/refactor-machine-agent r453 committed by jim.baker@canonical.com
<_mup_> Missing file
<m_3> ugh... ec2 is _crawling_ atm
<_mup_> juju/refactor-machine-agent r454 committed by jim.baker@canonical.com
<_mup_> Cleanup
#juju 2012-02-01
<SpamapS> grapz: yes, the method would be 'bzr branch lp:juju/docs foo ; cd foo ; edit stuff ; bzr push lp:~grapz/juju/docs/foo ; bzr lp-propose'
<SpamapS> grapz: you can check out the 'lpad' tool that the juju team uses too, it automates a lot of the stuff
<_mup_> juju/refactor-machine-agent r455 committed by jim.baker@canonical.com
<_mup_> Docstrings
<smoser> is there some id that i can get that is uniq to "this unit" ?
<smoser> config-get <this-unit's-id>
<smoser> that would be unique in the global namespace, but consistent for this unit
<smoser> SpamapS, ? (only pinging you because you might still be around due to $TZ)
<m_3> smoser: there's $JUJU_UNIT_NAME available from within a hook
<smoser> what does that look like ?
<smoser> <charmname>/[0-X]
<smoser> ?
<SpamapS> servicename/[0-9]+
<SpamapS> smoser: the combo of servicename and unit id will never be repeated in a single environment
<m_3> the digits (unit sequence number) are assigned by zk iirc... always increasing
<SpamapS> even if you destroy the service, the next deploy will create the next id higher
<smoser> hm..
<smoser> is there a completely unique id ?
<SpamapS> smoser: that is completely unique, per environment
<smoser> yeah, but that isn't good enough
<SpamapS> I have thought that the environment needs a UUID
<smoser> that wouldn't work either
<SpamapS> so that you could be specific when destroying it
<smoser> my "this unit" needs a UUID
<smoser> the reason is that i'm poking around with things that are globally (linux) namespaced
<SpamapS> smoser: env UUID + unit id == unique forever
<smoser> yeah..
<smoser> but is there an env UUID ?
<smoser> next question..
<smoser> how do i debug hooks ?
<smoser> :)
<smoser> http://askubuntu.com/questions/81818/failure-to-troubleshoot-a-juju-charm-deployment/82170#82170
<_mup_> Bug #82170: Welkom in Ubuntu 6.10 <Ubuntu-NL Website:Fix Released> < https://launchpad.net/bugs/82170 >
<smoser> but 'juju debug-hook' doesn't give me $JUJU_UNIT_NAME in my environment
<SpamapS> smoser: because you're not in a hook yet
<SpamapS> smoser: make a hook execute and you will have it
<smoser> how would i do that ?
<SpamapS> smoser: change a config setting, upgrade-charm, relate something..
<SpamapS> debug-hooks is a little broken in that you can't debug install
<smoser> so, lets just pretend that a crazy friend of mine was trying to get through an install
<smoser> and wanted to debug that
<SpamapS> smoser: what I've done, is have no install hook, deploy, then call install from upgrade-charm and just iterate using upgrade-charm
<m_3> smoser: recommend installing in smaller functions... call from config-changed to debug them... then call them from install for real
<m_3> so same thing with config-changed
<smoser> yeah.
<smoser> k
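The iterate-via-upgrade-charm approach described above can be sketched with stub hooks. Everything below is a demo under /tmp with echo bodies standing in for real hook logic; a real charm would keep these scripts in its hooks/ directory:

```shell
# Demo of the pattern: stub hooks stand in for real ones so the
# dispatch order is visible.
hooks=/tmp/demo-hooks
mkdir -p "$hooks"
for h in stop install config-changed start; do
    printf '#!/bin/sh\necho "ran %s"\n' "$h" > "$hooks/$h"
    chmod +x "$hooks/$h"
done
# upgrade-charm just chains the others, so re-running it re-runs install
"$hooks/stop" && "$hooks/install" && "$hooks/config-changed" && "$hooks/start"
```

Because upgrade-charm can be triggered repeatedly, this gives a way to iterate on install logic without redeploying.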
<smoser> SpamapS, to be clear, there is no ENV_UUID ?
<SpamapS> My usual upgrade-charm hook is to call hooks/stop && hooks/install && hooks/config-changed && hooks/start
<SpamapS> smoser: no, there is not
<m_3> couldn't find one through cursory greps through the code
<SpamapS> No its just an idea I have that it will be needed some day
<SpamapS> and would be useful for being careful with destroy-environment
<SpamapS> and ultimately could be a better replacement for control-bucket
<m_3> perhaps instance-id + unit name?  even that's pretty lame
<m_3> instance-id's avail in juju status... it's hard to parse tho... gotta map service unit -> machine -> instance-id
<smoser> can i get environment-name ?
<SpamapS> not that I know of
<SpamapS> smoser: what exactly is the purpose of this?
<smoser> nova-volume creates volume groups
<smoser> volume group names are global namespaced
<smoser> so if i have 2 units in local provider that try to create a volume named "my-volume" then one will fail
<SpamapS> shouldn't nova-volume take steps to prevent that?
<smoser> nova volume doesn't know that these 2 things are on the same linux kernel
<SpamapS> OH that sounds like LXC needs to namespace them
<smoser> right, but lxc does not. they're not namespaced in the kernel
<smoser> (neither are /dev/loopX )
<m_3> smoser: I think you can get the env name from a python script that just imports juju... the environment is pretty much the only thing you can interact with without twisted
<m_3> depends on where you're trying to get it from... the cli or a charm
<smoser> i'm probably over engineering at the moment.
<smoser> but i was willing to accept $ENVIRONMENT_NAME-$JUJU_UNIT_NAME as a unique identifier in that global namespace
<smoser> and then was going to just create volume groups named: my-volume-group-$ENVIRONMENT_NAME-$JUJU_UNIT_NAME
<smoser> i'll just use JUJU_UNIT_NAME for now
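A JUJU_UNIT_NAME-based volume group name, as smoser settles on here, needs one tweak: the unit name contains a "/" (e.g. nova-volume/7), which is not valid in an LVM VG name. A minimal sketch (the fallback unit name and the "my-volume-group-" prefix are illustrative):

```shell
# Derive a VG name from the unit name; JUJU_UNIT_NAME looks like
# "service/N", so swap the slash for a dash.
unit="${JUJU_UNIT_NAME:-nova-volume/7}"   # fallback value for the demo
vg_name="my-volume-group-$(echo "$unit" | tr / -)"
echo "$vg_name"   # -> my-volume-group-nova-volume-7 when the env var is unset
```

Within one environment this is unique; across environments on the same kernel it still collides, which is the gap the env-UUID discussion is about.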
<SpamapS> smoser: why not just plain old uuid ?
<smoser> because then i have to store it.
<smoser> is there some recommended data storage mechanism for charms ?
 * m_3 laughs
<SpamapS> "the filesystem"
<smoser> yeah.
<smoser> suggestion on where?
<m_3> facter?
<SpamapS> smoser: possibly $CHARM_HOME , as that will be removed whenever the unit is destroyed
<SpamapS> smoser: or possibly "anywhere other than $CHARM_HOME" , as that will be removed whenever the unit is destroyed. :)
<smoser> CHARM_DIR ?
<smoser>  /var/lib/juju/units/nova-volume-7/charm
<m_3> otherwise, we usually just dump stuff into the charm's home dir... /var/lib/juju/units/<unit_name>/
<SpamapS> CHARM_DIR, right
<m_3> /var/lib/juju's persistent tho
<smoser> are hooks serial ?
<m_3> yes
<SpamapS> m_3: I think we may want to make it clear that that dir is deleted with reckless abandon when units are destroyed..
<SpamapS> its a good place for things you *want* to disappear
<smoser> if its serial, then this is easy ehough.
<m_3> yup
<SpamapS> smoser: all hooks on one unit are serial
<smoser> yeah. thats fine.
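Putting the pieces together — SpamapS's UUID suggestion, CHARM_DIR as scratch storage, and serial hook execution — a sketch of minting a per-unit UUID once and reusing it (CHARM_DIR is faked for the demo; the dotfile name is arbitrary):

```shell
# Mint a unit-scoped UUID on first use and persist it in the charm dir.
# Hooks on a unit run serially, so this read-or-create has no race.
CHARM_DIR="${CHARM_DIR:-/tmp/demo-charm-dir}"
mkdir -p "$CHARM_DIR"
uuid_file="$CHARM_DIR/.unit-uuid"
if [ ! -f "$uuid_file" ]; then
    # Linux-only kernel UUID source; uuidgen would also work if installed
    cat /proc/sys/kernel/random/uuid > "$uuid_file"
fi
cat "$uuid_file"
```

Note the caveat raised just below in the channel: the charm dir is removed when the unit is destroyed, which is exactly right for an identifier that should die with the unit.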
<m_3> I bet that'll still be the case with subordinate charms too... it's twisted... single reactor
<jimbaker> m_3, the subordinate charm will be run by an independent unit agent
<SpamapS> yeah subordinates will have to run in parallel with the master
<m_3> hmmm
<m_3> seems like each hook exec should exec atomically tho
<jimbaker> anyway, this running in parallel is really the behavior one should expect
<SpamapS> m_3: nope, subordinates are just service units running on the same box.
<m_3> that would still allow subordinate to negotiate with primary through joined-changed-changed-etc
<jimbaker> m_3, correct
<SpamapS> It actually may present an interesting problem as subordinate usage explodes.. if people have 50 subordinates for all their management bits... add-units may be really painful.
<jimbaker> with the difference being that it's always a paired relationship - you don't see other units of the subordinate for a principal
 * m_3 plans to explode subordinate usage!
<jimbaker> to be clearer, you don't see the events for such units
<m_3> right
<jimbaker> SpamapS, add-unit doesn't make sense for a subordinate
<jimbaker> effectively they just get deployed by their principal
<jimbaker> on the same machine
<SpamapS> jimbaker: sure it does.. when you add-unit the master service, you have to deploy all the subordinates too
<SpamapS> thats what I mean.. the deploy is going to be taxing
<jimbaker> SpamapS, of course in that sense
<SpamapS> in fact
<SpamapS> apt-get will bomb out
<jimbaker> caching will be important
<SpamapS> no, apt-get will *fail*
<SpamapS> it has a lock file, and no "apt-get -- when something else is done apt-getting"
<SpamapS> we'll have to use aptd
<jimbaker> yes, i see your point
<jimbaker> SpamapS, given that subordinate charms are explicitly that (subordinate: true iirc is what's been decided), it should be part of the policy in subordinate development that they don't try apt-get in parallel
<jimbaker> SpamapS, by aptd, you mean aptdaemon?
<smoser> jimbaker, he probably did, yes.
<smoser> (man aptd)
<jimbaker> smoser, must be. not familiar with it. i wonder if it can be used relatively transparently with apt-get or some other cli
<jimbaker> aptdcon. not certain if it waits for the package to be installed before returning. anyway, we can make it work :)
<jimbaker> also the python api to aptd probably gives us some flexibility in working with transactions
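The dpkg-lock collision SpamapS describes can also be worked around by serializing installs explicitly. A sketch using flock(1) — this is an assumption about how cooperating hooks could coordinate, not what juju or aptdaemon actually does, and echo stands in for the real apt-get call:

```shell
# Serialize "installs" through a shared lock file with flock(1);
# apt-get would otherwise fail immediately when another process
# holds /var/lib/dpkg/lock.
LOCK=/tmp/demo-apt.lock
install_pkg() {
    flock "$LOCK" sh -c "echo installing $1"
}
install_pkg foo
install_pkg bar
```

Every hook that agrees to install packages only through such a wrapper will queue behind the lock instead of bombing out.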
<m_3> grrrrr peer relations bite
<m_3> waaaay too many relations to resolve... gets downright chatty
<jimbaker> m_3, i wonder if we will expand scoping to help with some of this chattiness. certainly something worth discussing at the next uds
<jimbaker> in budapest, one obvious thing we discussed was scoping relations so they would only chat if in the same geographic region
<jimbaker> stacks also provide obvious scope
<SpamapS> jimbaker: yes I meant aptdaemon
<jimbaker> SpamapS, looks like it should be workable. just something subordinates would need to know. can aptdaemon work at the same time as one apt-get?
<jimbaker> (to avoid reworking principals too)
<SpamapS> jimbaker: It will return a failure to the client
<SpamapS> jimbaker: IMO, the idea that subordinates should be special is absurd, and it will be plainly obvious once the feature is complete.
<SpamapS> jimbaker: ceph is a good example of something that will have to be a subordinate and a principal
<SpamapS> jimbaker: anyway, what I think we'll do is move to a declarative package installing plugin that will talk to aptd
<jimbaker> SpamapS, subordinates can have normal relations, so it's probably not so extreme. but the apt stuff certainly needs to be resolved
<SpamapS> jimbaker: and things that directly call apt-get will be flagged as broken
<jimbaker> maybe a role for charm helpers
<SpamapS> perhaps
<jimbaker> SpamapS, anyway, sounds good about the evolution of charms away from directly calling apt-get
<SpamapS> if for no other reason than to abstract apt-get away
<SpamapS> we could certainly just start doing it in metadata.yaml and have a charm helper that does it until juju does it natively
<SpamapS> sort of like ucf does for packages that want to support 3-way merges for conffiles
<jimbaker> SpamapS, i like the idea
<gmb> Hi folks. Is there any way, from within a charm's hooks, of getting the default values for a given set of config variables? (As opposed to whatever was passed to `juju set` last)
<nijaba> gmb: not that I know of, but you do have your config.yaml copied in /var/lib/juju/ as part of your charm if I am not mistaken.
<gmb> nijaba: Ah, that's useful to know, thanks.
<nijaba> np
<koolhead18> hi all
<koolhead18> jcastro: around?
<uksysadmin> hello all
<uksysadmin> anybody experienced with juju w/openstack w/keystone?
<uksysadmin> I'm getting 401 back when nova client and euca2ools work fine
<smoser> uksysadmin, sorry, i don't have experience with it.
<smoser> can someone verify for me that there is no "destroy-thyself" hook
<smoser> i'm somewhat in need of one to reliably be able to do 'juju destroy-environment' on the local host.
<smoser> as my charms are setting up loopback devices that have to be unsetup before 'rm -Rf' (lxc-destroy) will work.
<SpamapS> smoser: I'm convinced that you need LXC to namespace vg's or this just isn't going to work.
<smoser> that wouldnt help.
<smoser> not for this.
<smoser> lxc-destroy would still not work
<smoser> something is going to have to do 'losetup -d that-loop-device'
<smoser> or rm -Rf /that/path
<smoser> is not going to be allowed
<SpamapS> Yeah we really need units to have a destroy hook
<smoser> so either lxc-destroy would have to be taught to find open filehandles that were associated with loop devices (which lxc doesn't by default allow access to loop devices, so that would be strange)
<smoser> or i want a destroy-me hook, where i'd cleanup myself nice
<SpamapS> smoser: this is where we go "juju does not do anything with storage" and leave you confused. ;)
<smoser> well, this particular bit is really completely bogus.
<smoser> granting /dev/loop* access to the container is questionable at best
<smoser> and i realized, that the volume-group thing is probably a result of the container being able to see /dev/loop*
<smoser> if it could only see /dev/loop0 (or loop1) then vgscan probably wouldn't see the other volume group in a different container with that name.
<smoser> and wouldn't complain.
<smoser> but i dont know if that would just cause issues later... who knows.
<SpamapS> smoser: it sounds to me like lvm inside containers is dangerous.
<smoser> probably.
<smoser> root inside containers is dangerous
<smoser> :)
<smoser> but that has been hand waved at
<SpamapS> smoser: thats from a security standpoint. I'm looking at it from a "not f***ing your data" standpoint
<smoser> i think its the same thing.
<smoser> running arbitrary code as root in a container that can leak , is at risk to fsck your data
<smoser> period
<smoser> but i do think the volume groups are probably ok.
<smoser> as vg* commands inside the container are not going to have access to block devices outside.
<uksysadmin> np smoser
<gmb> Hi folks. We're in the process of trying to write some tests for one of our charms... http://bazaar.launchpad.net/~charmers/charm-tools/trunk/view/head:/doc/source/charm-tests.rst refers to a "get-unit-info" utility. Whereabouts can I find that?
<james_w> SpamapS keeps it hidden somewhere about his person at all times
<gmb> Hah.
<SpamapS> gmb: its embedded in the mediawiki and mongodb charms right now. I'm thinking of creating a new 'juju-tools' project to stick things like this in.
<gmb> SpamapS: Thanks.
<gmb> (And that would be cool)
 * SpamapS does that now
<SpamapS> I could put it in charm-tools.. but this feels like something else.
<robbiew> SpamapS: like teen spirit?
<robbiew> lol
<SpamapS> robbiew: we're just stupid.. entertainers..
<robbiew> here we are now
<hazmat> SpamapS, isn't that what charm tools is for?
<koolhead17> hi all
<koolhead17> does BZR require a specific port for the bzr branch command?
 * koolhead17 hates firewalls
<koolhead17> :(
<SpamapS> hazmat: no, charm tools is for charm oriented tools.. juju-tools is stuff to enhance the juju experience :)
<SpamapS> hazmat: ideally charm-tools dies one day when juju has full functionality. :)
<SpamapS> koolhead17: it works with launchpad over ssh
<SpamapS> koolhead17: tho you can do readonly checkouts via HTTP/HTTPS, I think
<koolhead17> SpamapS: so essentially i need the SSH port working for the same.
<niemeyer> Is anybody working on a charm and intends to push changes in the next 10 minutes?
<niemeyer> m_3: ping, maybe? :)
 * SpamapS checks for uncomitted deltas in his local repos
<hazmat> the old globally distributed lock ;-)
<SpamapS> nothing uncommitted here
<SpamapS> (and no spelling abiluhtee eethur)
<m_3> niemeyer: nope, I'm at mongo-boulder... the mongo gang mentioned you
<jcastro> SpamapS: hmm, I am debating on whether to post the raw IRC logs of m_3's session
<jcastro> http://irclogs.ubuntu.com/2012/02/01/%23ubuntu-classroom.html
<jcastro> I am not confident that the session missing the interactions and whatnot is useful for people who want to check it out
<_mup_> juju/ssh-known_hosts r486 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> juju/refactor-machine-agent r456 committed by jim.baker@canonical.com
<_mup_> PEP8
<m_3> jcastro: yeah, it's weak without the accompanying examples
<SpamapS> m_3: Oh did you have everybody log into juju-classroom ?
<m_3> yup
<jcastro> <3
<m_3> it was either that or an etherpad
<jcastro> ok so I think instead of highlighting past courses, we should just highlight upcoming courses
<jcastro> and bank on the interactive experience instead.
<jcastro> which involves users more and it's more fun
<smoser> anyone: https://bugs.launchpad.net/juju/+bug/875903 ?
<_mup_> Bug #875903: Zookeeper errors in local provider cause strange status view and possibly broken topology <juju:New> < https://launchpad.net/bugs/875903 >
<SpamapS> smoser: I saw that one a few times when I beat the heck out of my machine
<smoser> yeah. i assume its load based.
<SpamapS> smoser: IIRC I think we can just bump a timeout up to avoid hitting it
<smoser> where would said timeout live?
<SpamapS> smoser: something deep within txzookeeper IIRC
<SpamapS> win 57
<SpamapS> doh!
 * SpamapS hates when he reveals how many windows he has open
<Daviey> 57.. that is *weak
<hazmat> smoser, its on the roadmap
<med_> m_3, nice presentation, thanks for making it so.
<_mup_> Bug #925211 was filed: Refactor the machine agent so that unit agent management is moved to a separate class <juju:In Progress by jimbaker> < https://launchpad.net/bugs/925211 >
#juju 2012-02-02
<m_3> med_: right back at you... thanks man
<med_> heh.
<hazmat> SpamapS, i'm a fan of juju-tools
<niemeyer> Good morning everybody
<Tribaal> hi niemeyer
<niemeyer> Tribaal: Heya
<niemeyer> m_3: ping
<vila> SpamapS: DNS lookup failed: address 'store.juju.ubuntu.com' not found , known bug ?
<drussell> how far (if at all) are we along the road of having a decent api for juju?
<SpamapS> vila: yes, you need to add '--repository path/where/your/charms/are local:charmname'
<SpamapS> drussell: juju has a rich cli API that is very stable. :)
<jcastro> hazmat: we're signed up for LISA and I am going to do the submission soonish; I'm going to go for a lightning talk, a talk talk, and a charm school. Do you want to do the talk talk?
<jcastro> (I can do lightning, and we can both do the charm school perhaps?)
<drussell> SpamapS: rubbish for automation tho ;o)
<SpamapS> drussell: shell scripts are the oldest form of automation :)
<drussell> SpamapS: hehehehe
<drussell> SpamapS: are we looking to do anything extra? REST etc?
<SpamapS> drussell: there's a strong desire to have a REST api yes.
<drussell> SpamapS: so a strong desire, but it's quite a way off?
<drussell> SpamapS: is there any kind of roadmap available?
<SpamapS> drussell: resources are severely limited and dedicated to providing features necessary for production usage. REST is just a nice to have.
<SpamapS> lol
<drussell> SpamapS: "lol == no" then :o)
<SpamapS> drussell: the link in the topic is a decent view of whats being worked on now
<drussell> SpamapS: sure, have looked at the bug view, thanks
<hazmat> jcastro, sounds good solo talk talk, duo school
<jcastro> hazmat: ok you submit the talk and I'll submit the other 2. the important dates are here: http://www.usenix.org/event/lisa12/
<jcastro> the deadline isn't until May but if we can do this by say end of next week at the latest we'll be good to go.
<hazmat> jcastro, thanks
<benji> drussell: my team is building a charm using Python scripts instead of bash and has the beginnings of some thin (but helpful) wrappers around the juju commands
<hazmat> benji, cool
<koolhead17> hi all
<_mup_> juju/rest-agent-api r404 committed by kapil.thangavelu@canonical.com
<_mup_> notes on a rest api
<hazmat> m_3, drussell fwiw, these were the notes/spec i came up with on a rest api http://bazaar.launchpad.net/~hazmat/juju/rest-agent-api/view/head:/juju/rapi/rest-api.txt
<_mup_> Bug #925567 was filed: restarted unit agents do not restore relation presence nodes <juju:In Progress by fwereade> < https://launchpad.net/bugs/925567 >
<koolhead17> https://juju.ubuntu.com/charms?action=edit&template=SlideTemplate  says you are not allowed to edit; is this place supposed to keep custom slide designs?
<m_3> hazmat: awesome... looking now
<m_3> hazmat: thanks for including the classic restful version of the api too
<m_3> hazmat: I love the fact that the /commands part should make it pretty quick to get a web-based api working
<hazmat> m_3, indeed, the cli api effectively encoded as rest, its pretty useful.. also it becomes the backing for the cli
<hazmat> works well for automation
<m_3> awesome
<drussell> benji: thanks for the infoQ!
<drussell> hazmat: nice! thanks for that, will take a look
<SpamapS> koolhead17: we don't let everyone edit the wiki because it is raw HTML ...
<koolhead17> SpamapS: yeah. so i was wondering are we planning to put custom slides template there?
<koolhead17> i have to make slides for my talk so i thought i could use the custom one :P
<hazmat> niemeyer, not working with ethernet?
<niemeyer> hazmat: It hasn't disconnected, but the speed is _awful_
<niemeyer> hazmat: Can't hear a single word
<niemeyer> I'm trying to do some jujuing here
<hazmat> niemeyer, hmm.. okay.. good luck
<arosales> m_3: thanks for filling in and representing a MongoDB@Boulder, did you get some good interest?
<m_3> yeah, great audience level-wise... asked great questions
<m_3> make a plea for more mongo-charmers help to extend the charms we have
<m_3> s/make/made/
<arosales> m_3: cool, sounds like you were able to get a MongoDB with replicaset up and running
<m_3> yup... we had two environments going
<m_3> one I set up ahead of time with a mongo replset (3 nodes) and a node.js app fronting that
<arosales> nice
<m_3> one I spun up from scratch right there
<m_3> we ran out of time for the second one to come up completely
<arosales> sounds like you had some good stuff to show.
<m_3> yeah, I think it went over well
<arosales> hopefully folks will be enticed to join the mongo-charmers
<SpamapS> m_3: I was excited to charm MediaGoblin because it uses MongoDB .. but then they announced they're going to SQL ;)
<m_3> maria though?
<SpamapS> m_3: no, sql alchemy... so in theory, any sql
<SpamapS> m_3: I believe they're targetting sqlite and pgsql
<vila> SpamapS: oh, so 'local:' is to force juju to use --repo ?
<SpamapS> vila: right
<vila> SpamapS: How do I create a repo for precise ? Put it somewhere with a precise dir underneath and then one dir per charm ?
<vila> SpamapS: Or is there a simpler way ? (I need the keystone charm and probably the other openstack related ones but keystone first)
<SpamapS> vila: right thats how
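The layout vila describes boils down to repo/series/charm directories. A sketch with illustrative paths (the deploy line is commented out since it needs a bootstrapped environment):

```shell
# A local repository is just <repo>/<series>/<charm>; juju resolves
# "local:keystone" against --repository plus the target series.
repo=/tmp/demo-charms
mkdir -p "$repo/precise/keystone"
find "$repo" -mindepth 2 -type d
# juju deploy --repository "$repo" local:keystone
```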
<hazmat> m_3, SpamapS an experiment in generating test plans
<hazmat> http://paste.ubuntu.com/826934/
<hazmat> that's just from the dependency set not from the providers per se
<hazmat> there's some ambiguity in our dependency graphs that becomes apparent
<hazmat> 'nova' satisfied by nova-cloud-controller and nova-volume
<hazmat> and swift-proxy not being distinguishable from swift-storage
<hazmat> the nova one i think is a bug in the charms
<hazmat> the swift-storage vs. swift-proxy distinction is little harder to make out, they both provide and require 'swift'
<m_3> hazmat: cool man
<hazmat> adam_g, ping.. was wondering if you could shed any light on the 'nova' protocol provided by both nova-volume and nova-cloud-compute
<m_3> so that's just depth-first walking as best you can?
<hazmat> m_3, its a bit more than that
<hazmat> m_3, its doing a combination of every subgraph traversal
<hazmat> so for any given dependency there might be multiple implementors
<m_3> gotcha
<hazmat> each plan represents one of those dependencies being satisfied by a different participant; the number of plans grows exponentially
<hazmat> as the depth of the graph increases
<hazmat> i'm thinking just solving each subgraph with one dfs traversal might suffice better
<hazmat> to minimizing the plan set
<hazmat> but that misses some of the benefits of discovering interface changes that break parts of the graph, or discovering broken plans due to metadata
<hazmat> it seems like some of this breakage is because charms can use the relation name as a primary distinguisher so the author doesn't need to pay as much attention to the interface in some cases
<hazmat> at least thats how i assume the nova-volume/nova-cloud-controller both ended up working using the same 'nova' interface
<m_3> that's something we can start trying to catch in charm reviews
<adam_g> hazmat: do you mean the 'provides' in the metadata of those charms?
<m_3> might be worth trying to match interface first, then do a rough substring match on relation-name to charm-name to disambiguate
<m_3> instead of... what... alphanumeric sort on the charm-name for interface impls?
<hazmat> adam_g, yup
<adam_g> hazmat: IIRC (could be totally wrong here), i cannot create a charm that provides nothing, and only requires. those are just null place holders.  nova-volume and compute dont provide any service from Juju's POV. once they have relations to the database and message queue, they extend the service offered to the user by nova-cloud-controller (that is, the cloud as a whole)
<adam_g> if thats not the case, i can remove the 'provides' section of the metadata
<hazmat> you can provide a charm which provides nothing, but thats not the point i was trying to illustrate, effectively nova-cloud-controller has an optional dependency on a 'nova-volume' protocol and a 'nova-compute' protocol
<hazmat> they plug in differently into nova
<hazmat> they're distinct protocols and dependencies that can be solved
<hazmat> one can't substitute for the other
<hazmat> adam_g, you still want a provides for the nova-volume and nova-compute charms to model their relation with nova-cloud-controller
<hazmat> just that its distinct for each
<hazmat> right now its got nova-cloud-controller providing 'nova', with nova-compute depending on it, and nova-volume also providing it.. it sounds like you want both -compute and -volume to plug in as optional dependencies into -cloud-controller
<hazmat> which would look more like cloud-controller depending on 'nova-volume' provided by 'nova-volume' and also depending on 'nova-compute' provided by 'nova-compute'.
<adam_g> hazmat: this is for sake of modelling the over all relationship between charms, or the actual services? there is never a direct service relationship between nova-volume and nova-cloud-controller
<adam_g> that is, never 'juju add-relation nova-volume:foo nova-cloud-controller:foo'
<hazmat> adam_g, just trying to establish the possible relations for a service
<hazmat> adam_g, right now nova-compute for example depends on 'nova' which is satisfied by 'nova-cloud-controller' and 'nova-volume' which is a bit suspicious
<hazmat> adam_g, actually more succinctly said.. trying to establish the dependencies for a charm, ie what's required to deploy nova-compute for example
<adam_g> hazmat: so, if i follow, charms should describe the service they are providing: nova-cloud-controller provides 'nova', and charms that require 'nova' (even if there is no direct relationship between the units) should list this as an option requires?
<adam_g> *optional
<adam_g> i'll admit i wasn't giving the provides field much thought in the nova charms because it seemed to not make a difference, but if that needs to be fixed to accommodate external charm tools, ill gladly fix
<hazmat> adam_g, yup, the problem arises in that nova-volume also provides 'nova' when really what it and nova-cloud-controller are providing are distinct, thanks
<hazmat> even without the external tools, the declaration defines the possibility, a human could add those two together, even though that was not the intent, and get something not working, because the charm expectation was different.
<adam_g> hazmat: so, would it be better if: cloud-controller provides cloud-controller:nova and (optionally) requires cloud-compute:nova-compute and instance-storage:nova-volume.  -volume + -compute both require cloud-controller:nova
<adam_g> ?
<hazmat> adam_g, that works
<hazmat> hmm
<hazmat> yup that works, the cycle is fine
<adam_g> cool, running out to lunch. i'll update after
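adam_g's proposal can be written out as metadata.yaml fragments. This is a sketch of the scheme agreed above, not the charms' actual shipped metadata (relation names and interface names follow the wording in the discussion):

```shell
# Illustrative metadata for the proposed nova relation scheme; written
# to a file so it can be inspected after the fact.
cat > /tmp/demo-nova-metadata.txt <<'EOF'
# nova-cloud-controller/metadata.yaml
provides:
  cloud-controller:
    interface: nova
requires:            # both treated as optional in the proposal
  cloud-compute:
    interface: nova-compute
  instance-storage:
    interface: nova-volume

# nova-compute/metadata.yaml (nova-volume is analogous, with nova-volume)
provides:
  cloud-compute:
    interface: nova-compute
requires:
  cloud-controller:
    interface: nova
EOF
cat /tmp/demo-nova-metadata.txt
```

With distinct interfaces, a human (or a test-plan generator) can no longer wire nova-compute to nova-volume by accident.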
<koolhead17> hi all
<m_3> koolhead17: hey.. good to see you back at 17 :)
<koolhead17> m_3: hehe :)
<koolhead17> m_3: was going through log of juju session. it was great
<koolhead17> though am scared of mongo/node.js
<koolhead17> hifi stuff :P
<m_3> koolhead17: thanks
<koolhead17> okey i might be stupid to ask this but do we assume every user adding a repo via add-apt-repository knows that they need python-software-properties as a dependency?
<m_3> the IRC talks are a hard format... it's important to be able to show examples, but ajaxterm (juju-classroom charm) isn't ideal
<koolhead17> m_3: hmm. i would stick to wordpress :P
<m_3> ha!
<m_3> right
<SpamapS> koolhead17: all cloud images have python-software-properties since 10.10
<SpamapS> m_3: I still want to make our infinite-scaling ajaxterm :)
<m_3> SpamapS: yeah, that thought crossed my mind yesterday :)
<koolhead17> SpamapS: i was going to bootstrap my JUJU instance from my oneiric box :D
<m_3> SpamapS: it really would be a great set of charms... separating the "business logic" from the scalable presentation layer
 * koolhead17 is trying to run JUJU on AWS for the first time :)
<m_3> koolhead17: awesome
<koolhead17> SpamapS: thanks. Access Key ID == access key and Secret Access Key == secret key
<koolhead17> m_3: to be honest am using this whole magic AWS infra for first time :PO
#juju 2012-02-03
<koolhead17> okey guys i have juju running on my AWS :)
<koolhead17> i just had an issue though which got resolved after i restarted my client machine
<hspencer> koolhead17, nice
<koolhead17> hspencer: hey there. how have you been man?
<hspencer> been workin ahrd
<hspencer> hard
<hspencer> gettin ready for this Euca 3.0 drop
<hspencer> next week dog
<hspencer> its coming out
<hspencer> its lookin good
<koolhead17> cool
<hspencer> i haven't had a chance to take care of that walrus issue
<hspencer> i was lookin through boto code
<hspencer> cause it works with boto
<hspencer> wanted to get an idea of how it was fixed there
<koolhead17> k
<frankban> hey guys, I am working on a charm: is there a way to rerun the relation-name-relation-changed hook when a config is updated? e.g. from the config-changed hook? I am in a situation where a subscriber configuration value must be sent to the provider charm
<_mup_> juju/trunk-merge r278 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<niemeyer> Good morning everybody
<hazmat> frankban, not at the moment
<frankban> hazmat: thanks
<hazmat> frankban,  you can run other hooks from any given hook, but particular calls to the juju cli api for relations want certain parameters passed either on the cli or environment, normally juju takes care of that when executing them, but its not possible at the moment for other hooks to pass in the correct parameters to drive that usage of the hook cli api (stuff like relation-get, relation-set)
<vila> hi all,
<vila> running 'juju status' seems to spawn tens of ssh connections leading to thrashing,
<vila> additionally I can't ctrl-C it and have to kill it
<vila> any idea on how I could debug ?
<_mup_> txzookeeper/errors-with-path r45 committed by kapil.foss@gmail.com
<_mup_> merge trunk
<vila> ha, 'juju -v status' is a bot more verbose, but it mentions timing out on 3 connections while a 'top' lists tens of processes
<vila> s/bot/bit/
<vila> the machine has thrashed for a while and finally the ctrl-c has been taken into account
<vila> ok, fixed, my ssh configuration was broken recently
<SpamapS> vila: you can run 'juju open-tunnel' to keep an ssh tunnel open
<vila> SpamapS: ironically I broke my config when I grew bored of warnings about not being able to install a tunnel already in place and commented out too much ;)
<SpamapS> Attention juju LXC users on 11.10, this stable update needs verification: bug #922645
<_mup_> Bug #922645: ubuntu template should fail on error <verification-needed> <lxc (Ubuntu):Fix Released by serge-hallyn> <lxc (Ubuntu Oneiric):Fix Committed> < https://launchpad.net/bugs/922645 >
<_mup_> Bug #926146 was filed: destroy-environment says: 'NoneType' object is not iterable <juju:New> < https://launchpad.net/bugs/926146 >
<koolhead17> hi all
<koolhead17> SpamapS: i was able to execute wordpress charm but add service failed for me :(
<koolhead17> i am trying again now. :P
<koolhead17> jcastro: are you working on some documentation  with the recent juju commands in it
<jcastro> no
<jcastro> I've submitted a branch with some doc fixes for mentioning READMEs for charms though
<koolhead17> does image initialization on AWS take a lot of time?
<koolhead17> :(
<koolhead17> jcastro: was thinking to use charm get command
<jcastro> oh we don't need that
<koolhead17> usage: charm get name_of_charm [ local_charm_repo ]
<jcastro> soon the charm store will land
<jcastro> and we won't need any of that
<koolhead17> ooh. so for the 9th demo i should not use that command? i should simply show them wordpress charm which comes with the example?
<koolhead17> hi hspencer
<jcastro> don't use the example charm, it's out of date
<jcastro> what I do is do the charm get
<koolhead17> ok
<jcastro> and then tell them "we've got some UI things here that will make it easier, so ignore this ugly --repository, etc. part."
<jcastro> and then just tell them it'll be just like apt is
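The contrast jcastro is drawing looks roughly like this (paths and charm names are examples; the second form is the planned charm-store syntax, not yet available at the time of this log):

```
# local-repository deploy, the "ugly --repository" form:
$ juju deploy --repository ~/charms local:wordpress

# once the charm store lands, apt-style:
$ juju deploy wordpress
```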
<koolhead17> ok
<koolhead17> hmm. so when i do charm get wordpress, it will bring mysql charm too :)
<koolhead17> now am getting an error from the AWS dashboard <instance availability failed> weird
<koolhead17> i was using asia pacific since its nearer
<koolhead17> instance reachability check failed
<hspencer> hey koolhead17
<hspencer> how you doin?
<koolhead17> hspencer: am good thanks
<negronjl> SpamapS: thx for adding me :)
<SpamapS> negronjl: np, I'll explain what it is shortly. ;)
<negronjl> SpamapS: perfect ... already poking at it too :)
<janimo> niemeyer, hi, you mentioned that lpad needs to sign all requests so a Noop Auth is not enough as an anonymous login method. But from LP docs I understand that one should be able to get public data without needing to authenticate.
<janimo> The latter seems useful to me and simple to add
<niemeyer> janimo: What I said is in agreement, I believe
<niemeyer> janimo: I said we want to send the consumer of the data
<niemeyer> janimo: That's not authentication
<niemeyer> janimo: It requires us to sign the data
<janimo> niemeyer, ah I misunderstood. But would LP not answer if we did not set consumer?
<niemeyer> janimo: It would..
<janimo> for public data it is the simplest, just as if it was browsed anonymously via a browser
<janimo> I used lpad for a test from GAE and needed the noop auth hack again
<niemeyer> janimo: I don't get what's the problem
<janimo> as it does not use files and making an auth for its datastore is overkill imo if one only wants public data anonymously
<niemeyer> janimo: Just use OAuth{Anonymous: true}?
<janimo> ah, I thought only ConsoleAuth is to be used, not auth directly
<janimo> thanks, I'll check that out
<janimo> I did not read the file thoroughly and assumed OAuth is an interface name only not an implementation
<niemeyer> janimo: Ah, no worries
<niemeyer> Soooo..
<niemeyer> Symbolic links..
<niemeyer> Hmm
<_mup_> juju/unit-stop r422 committed by kapil.thangavelu@canonical.com
<_mup_> draft spec on stopping service units cleanly
<verterok> hi, I'm having some problems trying to deploy on openstack
<verterok> seems to be stuck in: provision:ec2: juju.agents.provision INFO: Starting machine id:1
<verterok> the bootstrap worked ok, then I did a deploy
<verterok> and looking at the debug-log I see this error: EnvironmentNotFound: juju environment not found: is the environment bootstrapped?
<verterok> after that, it keeps logging "Starting machine id:1:
<verterok> any idea what's going on?
<koolhead17> verterok: juju -v status
<koolhead17> lets see what do we get from that
<verterok> koolhead17: http://paste.ubuntu.com/828143/
<koolhead17> verterok: hal-bot ?
<verterok> koolhead17: an irc bot
<koolhead17> verterok: i think it will take some time for the instance to come up.
<koolhead17> because juju status looks perfect, error free :)
<verterok> koolhead17: this is the debug-log: http://paste.ubuntu.com/828146/
<verterok> which has an error, but I have no idea what it means
<koolhead17> verterok: is this a custom charm you have written that you're testing?
<verterok> koolhead17: yes, this is a custom charm
<verterok> and a very simple one btw
<koolhead17> verterok: charm is charm!! :)
<koolhead17> verterok: can you try running an already submitted charm from the repository to test if local env works well with it
<verterok> sure
<koolhead17> verterok: juju destroy-environment
<koolhead17> and try bootstrap again
<verterok> koolhead17: done and done
<koolhead17> verterok: juju -v status?
<verterok> koolhead17: not ready yet, "Retrying connection: No machines have assigned addresses"
<koolhead17> hmm. verterok unfortunately i have not been lucky yet to get charms running on my inhouse openstack environment
<koolhead17> what i did was i used LXC and did all charming on my local system :P
<koolhead17> verterok: pastebin the juju -v please
<verterok> koolhead17: ok, now it's up
<koolhead17> verterok: https://juju.ubuntu.com/
<koolhead17>   juju deploy --repository /usr/share/doc/juju/examples local:mysql
<verterok> koolhead17: status output: http://paste.ubuntu.com/828154/
<verterok> koolhead17: ok, running that charm
<koolhead17> COOL
<verterok> koolhead17: same error in the provision-agent :/
<verterok> EnvironmentNotFound: juju environment not found: is the environment bootstrapped?
<verterok> koolhead17: this is from the debug-log output
<koolhead17> verterok: but the environment is already bootstrapped IMO
<verterok> koolhead17: same status: http://paste.ubuntu.com/828158/
<verterok> koolhead17: yes
<koolhead17> verterok: are you using PPA
<verterok> koolhead17: yes
<koolhead17> verterok: would you mind pasting your environments.yaml
<koolhead17> also the debug-log you're running from the same user account!! :)
<koolhead17> as in same environment
<verterok> koolhead17: the yaml is: http://paste.ubuntu.com/828167/
<koolhead17> looks perfect
<koolhead17> verterok: am sorry but my limited knowledge is unable to solve your issue :(
<verterok> koolhead17: the debug-log is the same as with my charm... :(
<verterok> koolhead17: np, thank. I'll ping one of the core devs
<verterok> thanks for the help
<m_3> verterok: hi, did you get your services started on openstack?
#juju 2012-02-04
<_mup_> juju/close-open-port r453 committed by jim.baker@canonical.com
<_mup_> Corresponding fix
<hazmat> verterok, that looks odd, it looks like an s3 access error
<hazmat> verterok, what version of openstack is that?
<SpamapS> hazmat: should be diablo
<_mup_> juju/trunk r452 committed by jim.baker@canonical.com
<_mup_> merge close-open-port [r=hazmat][f=908198]
<_mup_> [trivial] Closing an unopened port, then actually opening it, should work.
<grapz> so, I guess most admins have a set of apps/stuff they want on all servers (fail2ban, bootmail, postfix config, syslog.conf). Has there been any talks about maybe supporting a charm that will be applied to all servers, in addition to the charm you deploy to do this, or is it going to be part of the sub-unit stuff. Or should admins just use puppet for those things ?
<SpamapS> grapz: Yes, the subordinate-charms spec covers this. You will have a system policy charm that you can deploy and relate to all services.
<grapz> SpamapS, cool
<SpamapS> grapz: its actually the main reason for subordinate-charms :)
<grapz> SpamapS, you don't have something like Harvest or bugtagging to find easy to fix bugs? :)
<koolhead17> hi all
<SpamapS> grapz_afk: we haven't been tagging things bitesize, but we should. :)
<_mup_> Bug #926550 was filed: No way to test proposed updates to juju <juju:New> <juju (Ubuntu):New> <juju (Ubuntu Oneiric):New> <juju (Ubuntu Precise):New> < https://launchpad.net/bugs/926550 >
<verterok> m_3: no, it never finished starting the machine
<verterok> hazmat: no idea of the specific version, I can ask one of the sysadmins on Monday
<verterok> is there a way I can get the version via an API?
<SpamapS> verterok: you are talking about canonistack right?
<koolhead17> SpamapS: did some work regarding  php5 bug today
<SpamapS> koolhead17: hm
<SpamapS> koolhead17: be warned, there's a HUGE change coming from Debian
<SpamapS> koolhead17: I think I may have to take the merge for that one
<SpamapS> koolhead17: and we may end up going with PHP 5.4.0 after all
<SpamapS> koolhead17: don't bother with 5.3.9 .. its broken. :-/
<koolhead17> SpamapS: ooh
<koolhead17> SpamapS: will you share your screen when you do this merging?
 * koolhead17 would love to see the magic
<koolhead17> i was thinking to just add the patch for the assigned bug!! :)
 * koolhead17 was reading up on quilt
<koolhead17> http://paste.ubuntu.com/828932/
<SpamapS> koolhead17: its not magic
<SpamapS> koolhead17: its boring, hard work. ;)
#juju 2012-02-05
<koolhead17> jcastro: around
<lotrpy_> a
<relateable> you guys change the name.
<verterok> SpamapS: yes, I was missing a step. getting a public ip for the instance.
<SpamapS> verterok: ahh, you can also just setup .ssh/config to forward ...
<verterok> SpamapS: thanks, I'll do that.
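One possible shape of the ~/.ssh/config forwarding SpamapS suggests: proxy connections to instances on the cloud's private network through a reachable gateway host. All host names and addresses below are placeholders:

```
# ~/.ssh/config (illustrative)
Host 10.55.*
    User ubuntu
    ProxyCommand ssh gateway.example.com -W %h:%p
```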
#juju 2014-01-27
<rick_h_> anyone used the debug-hooks lately? I can't get it to run config-changed on setting a value. It runs on install/boot of the initial service. http://paste.ubuntu.com/6823548/
<rick_h_> trying to debug; if I run the hook manually I get config-get: command not found, and if I try to run config-get I get an error: JUJU_CONTEXT_ID not set
<thumper> rick_h_: hey, around?
<rick_h_> thumper: howdy
<thumper> rick_h_: are you playing with trunk?
<thumper> or 1.16.5
<rick_h_> thumper: env is torn down, was trying to charm up my own app but having too much fun :/
<rick_h_> thumper: latest stable /me goes to check
<rick_h_> thumper: 1.17 trusty
<rick_h_> .0
<thumper> rick_h_: there is a new command
<thumper> juju run
<thumper> which will execute arbitrary code in a hook context
<rick_h_> ok, yea I noticed that was the only command I could complete with juju-<tab>
<thumper> so you can manually run commands
<rick_h_> juju-run $hook ?
<thumper> from the client like this
<thumper> juju run unit/3 hooks/foo
<rick_h_> ah, gotcha
<thumper> if you ssh to the machine
<thumper> you can get the same by going "juju-run unit/3 hooks/foo"
<rick_h_> any idea on why the config hook would fail to run. I've got a couple of logs in the process of that hook and all I'd get are the two lines in the pastebin
<rick_h_> it does run on initial deploy and worked perfectly
 * thumper scrolls up
<rick_h_> I got all happy until I tried to change the value via juju set
<rick_h_> thumper: http://paste.ubuntu.com/6823548/ is all I got on the unit side after doing a set
<rick_h_> which is what led me to debug-hooks trying to figure out why it wasn't doing anything
<thumper> when it calls the hook you should notice in the logs
<thumper> it'll say "calling hook foo"
<rick_h_> right, that's all that shows in the log
<rick_h_> nothing else, at least not in the unit-.log
<thumper> how were you changing the value?
<thumper> i've not looked into that code before
<rick_h_>  juju set bookie port=6548
<thumper> nor the filter bit
<rick_h_> picking a diff port each time I tried to trigger the hook
 * thumper goes to look at the source
<thumper> and I take it that there is a config value for bookie and port?
<thumper> well, for port in the bookie service
 * thumper goes to read the filter code
<rick_h_> thumper: right, bookie is the service, port is the config
 * rick_h_ goes to look that he didn't do something REALLY stupid
<rick_h_> https://github.com/bookieio/bookie-charm/blob/get-config-working/config.yaml
<thumper> wow this code is confusing
<rick_h_> heh, /me has special breaking powers to make something simple (config change) confusing.
<thumper> holy crap I can't penetrate the obfuscation of this code
<rick_h_> thumper: ok, don't worry about it for now. I've got to afk in a few min and I'll poke at it some more and see if I can debug with the juju-run hint
<rick_h_> maybe I do have something wrong and need to stab it more
<thumper> rick_h_: grab dimitern when you are around in the morning
<thumper> rick_h_: I think he has a good understanding of this
<rick_h_> thumper: rgr, thanks for the help
 * rick_h_ will move bmark.us to juju at some point darn it :)
<meebey> where in juju's sources can I find the list of supported/known series? I would like to add support for other machine distros
<meebey> also I haven't grokked yet what the difference of ubuntu server and ubuntu cloud image is. maybe cloud-init is related which is used as a form of system provisioning?
<thumper> meebey: hey there
<thumper> meebey: much of the juju internals currently expect an ubuntu image
<thumper> meebey: we plan to support other distros/OSes this year
<thumper> but work has not yet started on that
 * thumper has EODed
<meebey> thumper: noticed that already in various places in the code. I don't expect it to just work, I am interested in developing the support
<thumper> but perhaps poke fwereade, he should be online soon
<thumper> meebey: what is your target distro?
<meebey> debian wheezy
 * meebey is a DD btw
<thumper> ah, at least debian based :-)
<meebey> bootstrap does not need wheezy support for my use-case but machines would do
 * thumper nods
<thumper> definitely poke fwereade :)
 * thumper heading off now
<meebey> ok, thanks
<meebey> .oO( he needs a smuxi-server charm, I should probably write one )
<fwereade> meebey, hi
<meebey> fwereade: do you have hints for me where in the source to look at when I want to add machine support for debian?
<fwereade> meebey, heyhey
<fwereade> meebey, sorry delay, I've been having fun times with an electrician
<fwereade> meebey, so, to get stuff running on other OSs
<fwereade> meebey, cloudinit and upstart are the two things that immediately stick out -- cloudinit to set up the machines, upstart to keep the agents running
<fwereade> meebey, those are the source-specific bits
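To make the upstart half concrete: juju installs jobs of roughly this shape to keep agents respawning. This is an illustrative sketch, not copied from juju's actual job files; the description, paths and names are placeholders:

```
# /etc/init/juju-agent.conf (illustrative upstart job)
description "juju machine agent (sketch)"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /var/lib/juju/agents/machine-0/agent
```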
<meebey> fwereade: cloudinit is performed inside the created machine image / VM?
<fwereade> meebey, yeah, it's basically the first thing that runs, it usually comes from the cloud's metadata service
<fwereade> meebey, (offhand I'm not sure *exactly* how it gets there on MAAS)
<fwereade> meebey, the maybe more interesting side is actually starting up an instance with an appropriate image
<meebey> fwereade: does it need to download that from somewhere or is it provided from the juju core instance?
<meebey> the cloudinit part
<fwereade> meebey, the images we start already have cloudinit ready to run -- I'm afraid I've never checked exactly what mechanism triggers it
<meebey> so far I have made a ubuntu server install with juju-core and juju-local, to see how things work
<meebey> fwereade: ah ic, where does that image come from then, is it the ubuntu cloud image?
<fwereade> meebey, yeah, exactly
<meebey> alright, this clears some missing bits I had
<fwereade> meebey, the mechanism by which we choose an image is very relevant to you too
<fwereade> meebey, there's an image-metadata-url config setting
<fwereade> meebey, which needs to point at a simplestreams datasource describing what you've got available
<fwereade> meebey, the environs/simplestreams package is where you need to look for that
<fwereade> meebey, and I can't promise that there haven't been some ubuntuisms slipping in there, but the data format is explicitly not platform-specific
<fwereade> meebey, finally you need your charms to know about the "series" you're using, and have those match what you have in simplestreams
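A sketch of where the image-metadata-url setting fwereade mentions would live; the environment name, provider type and URL below are placeholders:

```
# ~/.juju/environments.yaml fragment (illustrative)
environments:
  my-debian-cloud:
    type: openstack
    image-metadata-url: https://streams.example.com/images/
```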
<meebey> ok, thanks for the info, this will get me going
<fwereade> meebey, cool
<meebey> yeah that is what exploded first in my face, the unknown series :)
<fwereade> meebey, come talk to us any time, we're keen to help
<meebey> will do, thanks
<fwereade> meebey, fwiw #juju-dev is maybe an even better place to ask about these details
<meebey> didnt know that one exists, I will join
<fwereade> meebey, it's sparsely populated at weekends but there's usually at least one core dev in there any time of day during the week
<noodles775> Hey there, I'm unable to bootstrap or destroy local environment on subsequent runs of juju after initially not entering sudo password: http://paste.ubuntu.com/6825699/
<noodles775> If there's an easy workaround, pls let me know. Also if I should file that as a bug...
<rick_h_> dimitern_: ping, got a sec to help me debug something with a failing config set? thumper thought you might know the code/be able to help a bit more than he could.
<lazyPower> noodles775, which version of juju are you running?
<lazyPower> And I'm aware of this issue, I ran into it last week. I can help clean that up
<noodles775> lazyPower: trunk, the rev is at the bottom of the paste.
<noodles775> Thanks :)
<lazyPower> Ah, i missed that
<lazyPower> ok typically what I had to do was nuke my local provider and recreate it. A few questions: Are you using encrypted HOME?
<noodles775> Yep, I am. I thought destroy-environment --force would nuke the local provider?
<noodles775> Ah - or manually nuke any lxc's? Let me do that.
<lazyPower> it's supposed to, i'm not sure what causes the mismatch - I haven't found the underlying problem yet. But when i removed $JUJU_PATH/environments/local.jenv, $JUJU_PATH/local
<lazyPower> plus the supporting stuff like files in /etc/lxc/auto
<lazyPower> one sec i have a script for this, but its kind of scary. Full of sudo rm -rf's
<lazyPower> https://gist.github.com/chuckbutler/8483204
<noodles775> OK, I'll try that, thanks! Do you have a bug for it? It'd be great to document everything you know.
<lazyPower> be careful with it - reference the file paths and do some investigation
<mbruzek> I am also having problems with juju and local this morning.
<lazyPower> mbruzek, whats the error with local?
<mbruzek> agent-state-info: '(error: symlink /var/lib/lxc/mbruzek-local-machine-1/config
<mbruzek>       /etc/lxc/auto/mbruzek-local-machine-1.conf: no such file or directory)'
<mbruzek> Do you think removing my local.jenv file is advised in this case?
<lazyPower> In this case, i'm not positive that nuking it is correct, but probably what I would do in the situation.
<rick_h_> mbruzek: let me know if you find a fix for that. I've been hitting it as well in trusty and I thuoght it was related to lxc/trusty/etc
<mbruzek> I have a cleanup script similar to lazyPower's and it removes the local.jenv vile
<mbruzek> rm: cannot remove '/home/mbruzek/.juju/environments/local.jenv': No such file or directory
<mbruzek> rick_h_, I am also running lxc and trusty
<lazyPower> mbruzek, nuke .juju/local
<mbruzek> With the latest updates
<rick_h_> mbruzek: I saw lxc updates come down the pipe this morning and been meaning to give it another go
<mbruzek> lazyPower, I don't have a .juju/local directory
<lazyPower> oh i see that its a leftover in /var/lib/lxc my mistake
<mbruzek> let me compare our cleanup scripts
<mbruzek> you posted a link back a few lines right?
<lazyPower> Yeah, it should be the same as yours with a disclaimer at the top that its scary
<lazyPower> i've been meaning to rewrite it in python with some environment checking, and re-prompting with what its going to do
<lazyPower> so its not so abrupt in nuking every LXC container, and config
<mbruzek> I see...
<lazyPower> I had thumper take a look at it, and he gave me some extension pointers. If you're interested I'll add you to my trello board and we can pair on it when we're not slammed with tests
<mbruzek> Yes my script does the same kinds of things
<mbruzek> It removes local directory and local.jenv
<lazyPower> That doesn't work for users with an encrypted $HOME btw - theirs is put in /var/tmp/juju
<mbruzek> Interesting, but my home is not encrypted.
<lazyPower> That was an FYI :)
<mbruzek> rick_h_, I have done all the clean up that I know of for local and I still get the error on symlink.
<rick_h_> mbruzek: :(
<rick_h_> mbruzek: yea, just duped here. I'm looking to see if there's a bug to track then
<lazyPower> There was a post on the mailing list this morning about lxc being a blocker on the 1.17.1 release
<lazyPower> not sure if its related
<rick_h_> lazyPower: linky? might be
 * mbruzek wonders if he could fix the symlink problem manually
<lazyPower> https://lists.ubuntu.com/archives/juju-dev/2014-January/002042.html
<rick_h_> mbruzek: well I think it has to do with perms of the lxc vs the charm deploy command being non-sudo
<lazyPower> yeah, i was skimming
<lazyPower> doesn't seem to be related :( Sorry
<rick_h_> lazyPower: yea that's the same
<rick_h_> lazyPower: that's the error in the bug we're getting
<lazyPower> ah bueno
 * rick_h_ follows bug
 * mbruzek is tracking the bug too
<mbruzek> rick_h_, I have created the /etc/lxc/auto directory and seem to be getting further.  This problem rings a bell with something I have seen previously.  Is it possible the directory is not getting created by the package ?
<mbruzek> Obviously I did not delete this directory so I am wondering how I could have got this error before
<rick_h_> mbruzek: not sure, can see if sinzui got any info on the bug after he posted it and knows anything more?
<mbruzek> Yeah I will talk with sinzui
<sinzui> mbruzek, I am testing a fix for your issue now. I hope it will be the very version I release as 1.17.1 in a few hours
<mbruzek> OK my local provider is making it much further now
<melmoth> jamespage, i m having some problem with the ceph-osd charm https://pastebin.canonical.com/103619/ i do not find where is the call to "sgdisk .... --change-name" implemented.
<avoine> mbruzek: are you using trusty?
<mbruzek> avoine, yes I am
<jamespage> melmoth, oh - that's somewhere in the upstream code under ceph-disk-prepare
<avoine> yeah, in the new version of lxc, containers use a configuration option for autostart instead of symlinks
<avoine> in fact it uses both
<avoine> mbruzek: but I think there is a bug in lxc that prevents us from starting machine at all
 * rick_h_ and mbruzek breaking things early and often :)
<rick_h_> ooh, that doesn't sound helpful
<mbruzek> After adding that "auto" directory I am seeing my charms deploy on local
<avoine> also, the old symlinks-not-being-cleaned-up bug is fixed in the latest version of golxc: https://bugs.launchpad.net/golxc/+bug/1238541
<_mup_> Bug #1238541: Local provider isn't usable after an old environment has been destroyed <intermittent-failure> <local-provider> <golxc:Fix Committed by patrick-hetu> <juju-core:Fix Committed by wallyworld> <https://launchpad.net/bugs/1238541>
<rick_h_> avoine: yea, that was better this morning
<avoine> mbruzek: so it work?
<mbruzek> it looks like it, let me run a test
<avoine> I have: lxc-start: failed to create cgroups for ...
<avoine> when I try to start the container created by juju manually
<mbruzek> avoine, I have a script that cleans up my local/lxc environment possibly working around that bug
<mbruzek> I can interact with the memcached charm normally.
<mbruzek> I think this is working for me.
<lazyPower> Thats awesome man
<lazyPower> mbruzek +1
<avoine> it must be something with my installation then
 * rick_h_ can test it out
<mbruzek> Are you still having a problem?  I can share my cleanup script if you want
<rick_h_> mbruzek: what auto directory did you have to create?
<rick_h_> mbruzek: and did you sudo bootstrap or non-sudo bootstrap?
<mbruzek> /etc/lxc/auto
<mbruzek> I did sudo bootstrap
<rick_h_> mbruzek: so that's a root dir or a user dir?
<rick_h_> /etc/lxc/auto
<mbruzek> I saw the notes that we may not have to but I did not know if I had the right code to try it without sudo
<mbruzek> root
<rick_h_> mbruzek: cool, my charm I was working on this weekend is running the install hook so looking good so far
<rick_h_> avoine: ^
<mbruzek> rick_h_, that is great
<rick_h_> so sounds like that directory is the bugfix that should be in sinzui's stuff he's testing, so hopefully we'll be good to go when 1.17.1 lands
<rick_h_> hmm, and my config change fails there as well.
<rick_h_> mbruzek: do you still have your env up? Can you try to juju set something and watch the log and see if it runs/works through the config-changed hook?
<dimitern_> rick_h_, sorry, i'm here now
<noodles775> mbruzek, lazyPower: fwiw, I created a separate bug 1273295 for the issue i was seeing - thanks for the work-around.
<_mup_> Bug #1273295: Cannot bootstrap or destroy local env after cancelling bootstrap <juju-core:New> <https://launchpad.net/bugs/1273295>
<lazyPower> Happy to help
<rick_h_> dimitern_: on a call but I'm having an issue where I don't get my config-changed hook to run when I juju set xxx, I only get http://paste.ubuntu.com/6826409/ but the config-changed hook is run after install so the hook is good at least once
<dimitern_> rick_h_, juju set or relation-set inside a hook?
<rick_h_> dimitern_: just ran juju set from cli to test out changing the value after it's brought up
<dimitern_> rick_h_, so, you're saying config-changed fired once after install, but not when calling juju set
<dimitern_> rick_h_, have you tried debug-hooks to intercept any hook (and possibly run relation-set foo=bar) ?
<rick_h_> dimitern_: yes, the debug-hooks didn't catch anything
<dimitern_> rick_h_, is the unit in error state? is the service alive or dying?
<rick_h_> dimitern_: alive and running
<dimitern_> rick_h_, can you paste the whole uniter log?
<rick_h_> uniter log = machine log? or unit log? or different log?
<dimitern_> unit log
<rick_h_> dimitern_: k, after call will do
<dimitern_> rick_h_, cheers
<lazyPower> marcoceppi, http://paste.ubuntu.com/6827018/
<lazyPower> Opening a bug, but curious if you had an immediate response to the behavior
<marcoceppi> lazyPower: that could be very very bad
<lazyPower> marcoceppi, just had a thought, i'm renaming my deployed services and this could be related.  deploy('mongos', charm='mongodb')
<marcoceppi> lazyPower: interesting, I think I know the issue
<marcoceppi> raise a bug, no time to look today
<lazyPower> got it
<rick_h_> dimitern_: http://uploads.mitechie.com/lp/unit-bookie-0.log is the file. It's got a background thing spewing log stuff. I did a fresh juju set and pasted the command and some white space at the end of the log
<rick_h_> dimitern_: if you go to the start of the file and seach down can see the config-changed hook running and output logging info during install
<dimitern_> rick_h_, looking
<avoine> I think something is wrong with the stream server: https://streams.canonical.com/juju/tools/streams/v1/index.json
<lazyPower> marcoceppi, https://bugs.launchpad.net/amulet/+bug/1273312
<_mup_> Bug #1273312: CharmStore.py failure on checking available relations <Amulet:New> <https://launchpad.net/bugs/1273312>
<dimitern_> rick_h_, is this the complete log?
<rick_h_> dimitern_: yea, scp of the unit log from the machine and uploaded to you
<dimitern_> rick_h_, judging by the log the config-changed hook never completed and was not committed
<rick_h_> dimitern_: never completed during the install process?
<rick_h_> so it doesn't re-run on juju set?
<dimitern_> rick_h_, no, the install hook ran and finished, then it was committed to the local uniter state which tracks which hooks it has to run next
<dimitern_> rick_h_, but the config-changed hook seems to keep running and never returning anything, hence the uniter still thinks the hook has not completed
<rick_h_> dimitern_: ah, gotcha
<rick_h_> ok, I'll go look at my hook more. That gives me a hint as to how to debug
<dimitern_> rick_h_, can I take a look at the charm source?
<rick_h_> https://github.com/bookieio/bookie-charm/blob/get-config-working/hooks/config-changed
<rick_h_> dimitern_: ^
<dimitern_> rick_h_, ta
<dimitern_> rick_h_, that make stop and make run at the end look like it's blocking on doing something
<rick_h_> dimitern_: yep that's probably it
<rick_h_> I'll have to look into how it works
<dimitern_> rick_h_,  ok, glad i could help :)
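One way out of the blocking `make run` dimitern spotted is to start the long-lived process detached so the hook can return and the uniter can commit it. A minimal sketch; the helper name and the make targets are hypothetical, not from the bookie charm:

```shell
# helper: run a long-lived command immune to hangup, in the background,
# so a hook like config-changed doesn't block on it
start_detached() {
    nohup "$@" >/dev/null 2>&1 &
    echo $!   # report the background pid to the caller
}

# at the end of hooks/config-changed one might then do:
#   make stop || true
#   start_detached make run
```

The hook exits as soon as the process is launched, which is what lets the uniter mark config-changed as completed.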
<lazyPower> Question about implementation of an amulet test, is it considered sliming the test if i use pymongo to validate a connection or should i run a mongoc shell via a sentry?
<mbruzek> lazyPower, sliming?
<lazyPower> when you slime, you're fudging a test case to get it to pass to test something, whether its related or not.
<bloodearnest> is there a way when debugging a dev environment, to inspect the relation data for a relation?
<bloodearnest> other than inspecting logs
<bloodearnest> like querying mongo or some such
<bloodearnest> unless there's a simple way that I've somehow missed
<lazyPower> bloodearnest, debug-hooks is great for this, you can use relation-get to fetch the key/val you're looking for.
<bloodearnest> lazyPower: yeah, I'd like to do it after-the-fact if poss
<bloodearnest> debug-hooks is good, but has a set up cost (not just command, you need charms in correct state to perform the hook)
<lazyPower> I'm not aware of any method to retrieve the relationship data after you've exited the hook-scope.
<lazyPower> what I do in that scenario is i cache it in the $CHARM_DIR/.relationship_name
<lazyPower> so i can reference it in subsequent hook runs outside of that scope, and maintain idempotency
<lazyPower> bloodearnest, https://github.com/chuckbutler/errbit-charm-chef/blob/master/hooks/cookbooks/errbit/recipes/errbit-relation-changed.rb#L7
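A minimal shell version of the caching pattern lazyPower describes: a relation hook saves the values it read with relation-get into $CHARM_DIR so later hooks, outside the relation scope, can re-read them. The function and file names here are made up for illustration:

```shell
# cache one relation value to a dotfile so it survives past hook scope
cache_relation_value() {
    # $1 = directory to cache in (e.g. $CHARM_DIR), $2 = relation key
    relation-get "$2" > "$1/.relation-$2"
}

# in a *-relation-changed hook one might call:
#   cache_relation_value "$CHARM_DIR" host
# and later hooks can simply: cat "$CHARM_DIR/.relation-host"
```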
<bloodearnest> lazyPower: yeah, I've thought of doing similar
<meebey> can I have a 2nd juju local server or is that limited to a single system/box?
<meebey> I could make another juju install, but then I can't make relations between them
<jcastro> meebey, you can have multiple environments on the same box
<jcastro> the bummer is we can't do cross-environment relations yet
<jcastro> so if you don't need things to talk to each other you can do multiple local providers
<meebey> ic
<meebey> so openstack is the only private cloud option then, right?
<jcastro> or just raw MAAS
<jcastro> but you probably want this
<jcastro> https://juju.ubuntu.com/docs/config-manual.html
<jcastro> that will let you deploy to any server with ssh
<jcastro> however, there be dragons with manual provisioning, it's not ready yet last I checked
<meebey> hehe
<jcastro> I'll check on manual progress when thumper comes online in a few hours
<jcastro> we need to send a status report on that feature to the list anyway
 * thumper looks at jcastro
<jcastro> speak of the devil!
<thumper> jcastro: I told axw to look at it to confirm validity
<thumper> as he is the source of all that
<thumper> please chase with him
<jcastro> thumper, ack
<jcastro> arosales, another thing I noticed during reviews
<jcastro> our descriptions in metadata.yaml tend to be long, with like bullet points and paragraphs
<jcastro> I am cutting those down since the README has all the bling, the description doesn't need to be so epic
<arosales> jcastro, agreed, though it should give enough of an overview of what the service is and what the charm does
<jcastro> yeah, most of them read like the author just copied and pasted the first 3 paragraphs from wikipedia
<jcastro> heh
<arosales> jcastro, the "summary" page doesn't have the readme there, and is the first thing a user sees when they click on a charm
<arosales> jcastro, agreed it needs to be more charm specific and give a quick idea of what the service does
<arosales> just in case the charm upstream is new
<arosales> or folks haven't heard of the project. but agreed it shouldn't only be a copy of the wikipedia article; summarizing the wikipedia article and providing a charm overview would be a plus.
<jcastro> http://imgur.com/hjFSTnu
<jcastro> The user won't even _see_ that page. :)
<arosales> jcastro, which page?
<jcastro> the charm store page
<arosales> ah
<arosales> well
<jcastro> man, I think our search results are getting worse
<arosales> if they are discovering via google
<arosales> but if they are clicking via juju.u.c
<arosales> they will see it
<arosales> in any case we should very much consider how the metadata and readme look in jujucharms.com
<arosales> jcastro, the issue of it not having the correct seo juice is a whole other issue :-)
<jcastro> yeah
<jcastro> I am just saying, I went to go prove you wrong
<jcastro> but then I ended up on that page
<jcastro> and started crying
<arosales> jcastro, well I suggest we make the policy and readme more specific as to what we are looking for
<arosales> jcastro, policy currently just states, "Must include a full description of what the software does in the metadata."
<arosales> and best practice just states, "Provide an overview of the service in the README and metadata."
<arosales> if I were a new user I would like for that part to be opinionated
<arosales> specifically the policy should state the description should give an overview to what the deployed service is and the highlight of the charm
<jcastro> I was thinking just applying common sense
<arosales> suggest adding to policy the charm should follow the charm tools proof template
<arosales> well, common sense to you may not be common sense to some rookie like me :-)
<jcastro> then it ends up looking like this otherwise: http://www.debian.org/doc/debian-policy/ch-controlfields.html
<jcastro> not that there's anything wrong with that. :)
<arosales> jcastro, I don't think we'll go that far, but one could argue debian has had some success, so it may not be that bad of an idea to take pointers from
<arosales> jcastro, also did you get any other feedback on the bundle policy from the list?
<jcastro> jamespage, are there any usecases where one might use the "ceph" charm without -osd and -radosgw?
 * arosales should probably propose to the list the above suggested changes
<jamespage> jcastro, sure - just a small three node cluster would just use ceph
<jcastro> jamespage, ack
<jcastro> arosales, no feedback on the policy
<jcastro> I think we can safely go for "must include official charms to be an official bundle" without too much trouble
<arosales> jcastro, ok do you want to make a card for starting to document the charm store bundle policy?
<arosales> jcastro, only feedback I had was the bundle should endeavour to be production grade
<jcastro> I could but I am still having a hard time understanding when bundles are supposed to land/work
<jcastro> and we did add bundles to policy already
<jcastro> we just didn't get any feedback on it
<arosales> jcastro, sorry did I miss that in the docs?
<arosales> juju.u.c/docs
<arosales> ah I see the reference now in https://juju.ubuntu.com/docs/authors-charm-policy.html for bundles
<jcastro> yeah remember we made one policy page for both instead of 2
<arosales> makes sense
<arosales> jcastro, do we just need to figure out how to promulgate them
<jcastro> the one gotcha is we still need to make `bundle add readme` and a template
<jcastro> we don't have a template for bundle readmes yet
<jcastro> arosales, they show up in the GUI and everything
<jcastro> you just have to manually make them and edit the file and so on
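For reference, a hand-edited bundle file of the sort jcastro mentions might look roughly like this, assuming the juju-deployer YAML layout of the time; the stack name, charms, and unit counts are made-up examples:

```yaml
# hypothetical bundle in deployer format: a wordpress + mysql stack
wordpress-stack:
  series: precise
  services:
    wordpress:
      charm: cs:precise/wordpress
      num_units: 1
    mysql:
      charm: cs:precise/mysql
      num_units: 1
  relations:
    - [wordpress, mysql]
```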
<arosales> jcastro, were you looking for that to be in charm-tools
<jcastro> yeah
<jcastro> it's on marco's todo, I filed a bug, etc.
<arosales> jcastro, ok, and then once that is made probably need to add a bundle specific section on https://juju.ubuntu.com/docs/authors-charm-store.html#submitting
<arosales> jcastro, thanks could you make sure that bug has a card tied to it, and a card for updating https://juju.ubuntu.com/docs/authors-charm-store.html#submitting
<jcastro> https://juju.ubuntu.com/docs/charms-bundles.html
<jcastro> For submitting that had to be a separate page
<jcastro> there's a link in the 4th bullet from the page you just linked
<arosales> jcastro, agreed but technically they are not part of the recommended charms
<arosales> ie not owned by charmers
<arosales> so really not getting a chance to be reviewed against the policy
<jcastro> right
<jcastro> because we haven't really "promulgated" any bundles
<arosales> :-)
<jcastro> ie. we're making them, but not putting them in the store
<arosales> jcastro, understood and today I don't find any docs on how to get a bundle promulgated
<jcastro> right, we haven't done that yet
<arosales> agreed, thus the ask for cards :-)
<jcastro> I am confused
<arosales> jcastro, sorry, let me try to clarify
<arosales> 1. we have bundles in our charm store policy
<arosales> 2. we have docs on how to make a bundle (in a personal namespace)
<arosales> if we would like to have bundles "recommended" and reviewed by the ~charmers team, then we need to have the referenced bug resolved, along with documenting how to submit a bundle for review
<jcastro> ok
<jcastro> I get that
<jcastro> I am just confused on the timeline
<jcastro> Were we trying to get bundles out of namespaces and officially in the store?
<jcastro> I mean, I can barely get any of them deploying right now
<arosales> I think that is really determined by when we can resolve the bug you referenced (btw what is the bug #) and when we have a good process on submitting for review and housing bundles (ie are they owned by charmers too)
<jcastro> rick_h_, hey what was that bug where if one of the instances doesn't come up deployer just stops and doesn't do relations, etc
<arosales> and perhaps why we need a central place to track bugs
<jcastro> meebey, manual provisioning will be in 1.18
<jcastro> so hopefully that'll work out for you
<rick_h_> jcastro: looking
<rick_h_> jcastro: https://bugs.launchpad.net/charms/+source/juju-gui/+bug/1252301 kinda but really I'm not sure it's a bug but working as intended
<_mup_> Bug #1252301: guiserver reports second bundle as failing after the first fails <juju-gui (Juju Charms Collection):Triaged> <https://launchpad.net/bugs/1252301>
#juju 2014-01-28
<hobbyBobby> hi all, struggling to set up virtual box and I have a sos report here if you're willing to help
<hobbyBobby> *with juju
<hobbyBobby> from what I can tell it can't resolve the mongo address because of some networking issue
<hobbyBobby> wow im daft
<rick_h_> thumper: the config hook issue from last night was because I ran a make command that didn't background. So it never 'committed' and wouldn't rerun
<rick_h_> thumper: just fyi, I was a moron that is all :)
<thumper> :)
<melmoth> hola... i'm trying to remove a ceph-osd unit. it fails when it tries to destroy the relation with ceph-mon:
<melmoth> 2014-01-28 10:40:05 ERROR juju.worker.uniter uniter.go:350 hook failed: fork/exec /var/lib/juju/agents/unit-ceph-osd-2/charm/hooks/mon-relation-departed: cannot allocate memory
<melmoth> the ceph-osd box has 500M of ram
<melmoth> what can i do ?
<jamespage> melmoth, not much - you could do a juju terminate-machine --force #
<jamespage> that will kill it outright for you
<melmoth> i destroyed the whole environment, and redeployed again with 1G of ram on the osd node
<melmoth> jamespage seems to work better
<jamespage> melmoth, yeah - anything complicated needs something bigger than 500M
<jamespage> melmoth, we bumped the default instance memory in our openstack qa environment for that reason
<jamespage> 900M
<bloodearnest> anyone know of any charms that can utilise round-robin DNS?
<rick_h_> bloodearnest: how would that work? Wouldn't you just assign the cnames in dns and point at the exposed front end services?
<bloodearnest> rick_h_: it's for the frontend that I want it :)
<bloodearnest> rick_h_: I was thinking that when a new unit is added, it adds its ip to the pool of ips in the DNS server
<rick_h_> bloodearnest: oic, so you'd need a dns server charm that could accept a relation to web app charms then
<bloodearnest> and when the unit is removed, it pops its ip from the pool
<rick_h_> right
<bloodearnest> rick_h_: in theory, the DNS wouldn't need to be charmed (as DNS tends to be non-juju-env infrastructure), but it'd need an api for a charm to use
<bloodearnest> I guess
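The add/remove bookkeeping bloodearnest describes — units pushing and popping their ips from the record pool — is just data management, independent of any particular DNS server. A sketch under assumptions: `DnsPool` and its method names are invented for illustration; a real charm would call something like this from its relation hooks and then push the resulting record set to the DNS server's API:

```python
class DnsPool:
    """Track the set of unit IPs behind a round-robin DNS record.

    Hypothetical sketch, not any existing charm's code: relation-joined
    would call unit_joined, relation-departed would call unit_departed,
    and the returned record set would be pushed to the DNS backend.
    """

    def __init__(self):
        self.ips = set()

    def unit_joined(self, ip):
        # a new unit adds its ip to the pool
        self.ips.add(ip)
        return sorted(self.ips)

    def unit_departed(self, ip):
        # a removed unit pops its ip from the pool
        self.ips.discard(ip)
        return sorted(self.ips)


pool = DnsPool()
pool.unit_joined("10.0.0.1")
pool.unit_joined("10.0.0.2")
print(pool.unit_departed("10.0.0.1"))  # ['10.0.0.2']
```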
<rick_h_> bloodearnest: https://jujucharms.com/sidebar/~hazmat/precise/aws-route53-2/#bws-readme looks interesting
<bloodearnest> not 100% convinced rr-DNS is the best solution for this problem, just seeing if anything has already been done
<rick_h_> bloodearnest: I think honestly most people just put haproxy in front of it
<rick_h_> and then use the relations to that to control adding/removing
<bloodearnest> rick_h_: sure, but this is aimed at scaling 1M+ long-lived https connections
<bloodearnest> s/https/tls, it's not http
<rick_h_> bloodearnest: ah k
<bloodearnest> some http clients don't support rr-DNS well anyway, so we may have to DIY it
<bloodearnest> rick_h_: tell you all about it in Cape Town next week :)
<rick_h_> bloodearnest: sounds good
 * bloodearnest has a list of juju gui feature ideas to throw at someone... :)
 * rick_h_ ducks
<jcastro> negronjl, https://bugs.launchpad.net/charms/+source/mongodb/+bug/1267222
<_mup_> Bug #1267222: Race condition when deploying a simple replica set <audit> <mongodb (Juju Charms Collection):New for negronjl> <https://launchpad.net/bugs/1267222>
<jcastro> hey if you're too busy to fix that if you could post where you think the problem is I can have one of our dudes take a look?
<jcastro> arosales, mbruzek, lazyPower, marcoceppi: I have a newish proposal for the audit
<jcastro> so like, we sorted the list by downloads, but we're now getting into the "not top 10".
<jcastro> IMO we should be smarter on targeting now, so for example next on the list is "phpmyadmin", I'd like to skip that and go to ganglia next.
<marcoceppi> jcastro: sounds good to me
<marcoceppi> feel free to re-order the list as you see fit
<jcastro> like, go after the big services, nothing against phpmyadmin
<lazyPower> Agreed, if you need to change my priority queue i have no problems with it.
<jcastro> marcoceppi, I thought about that but I do want to leave it sorted by downloads, because people are using it, and I get that
<lazyPower> jcastro, actually if you want you can assign a batch of 3 or 4 charms so i have "up next" targets
<jcastro> but like, logstash-agent is #93 and that's probably way more important than phpmyadmin
<jcastro> lazyPower, I am putting "should be higher priority" in the notes column
<jcastro> then I'll just start on their readmes, so "follow the green" I guess?
<jcastro> lazyPower, hadoop worked fine for me btw
<lazyPower> thats so strange
<jcastro> is it not working for you at all?
<lazyPower> Which version of juju are you running jorge?
<lazyPower> no, it's got an inline hack to parse the machine id, and since my LXC containers do not generate FQDNs it's failing at that point in the install hook.
<lazyPower> This may be specific to my setup that's causing the failure, and I feel like this is a growing problem that probably needs to be addressed this week.
<jcastro> lazyPower, 1.17
<jcastro> lazyPower, my local provider is broken so I was on AWS
<lazyPower> aaahhh see, i said it works in cloud providers :)
 * jcastro is on trusty
<jcastro> oh, I missed that!
<lazyPower> glad its +1'd though
<lazyPower> the tests for that are going to be hairy
<arosales> jcastro, agreed and some charms may not have had the life span to attract the higher number, but if you know ones we should hit first perhaps highlight those in orange
<jcastro> yeah, like I said, nothing wrong against phpmyadmin, but like, we should be wailing on things like cassandra and so on
<arosales> jcastro, I got your feedback addressed in https://code.launchpad.net/~a.rosales/charms/precise/jenkins/update-metadata-readme/+merge/203181
<arosales> lazyPower, mbruzek good time to do some +1's on the queue this morning
<lazyPower> arosales, already ahead of you :) Working through OpenMRS as we speak
<arosales> lazyPower, mbruzek hpcc and openvpn-* are also good candidates
<mbruzek> arosales, take from the top of the review queue or the red tracking box in leankit?
<lazyPower> good question ^
<lazyPower> arosales, jcastro can i get another +1 on this? the charm is solid from my testing standpoint. https://bugs.launchpad.net/charms/+bug/1125869
<_mup_> Bug #1125869: New charm: postfix <Juju Charms Collection:New for jose> <https://launchpad.net/bugs/1125869>
<arosales> mbruzek, lazyPower generally from the top as those have been in the queue the longest
<jcastro> looking
<hazmat> bloodearnest, rick_h_ that aws-route53 charm isn't rr dns fwiw.. but route53 can do it, and low latency routing across multiple backends.
<bloodearnest> hazmat: yeah, I'd love to use route53, but it's ec2 only, and my target is openstack. idk of an equivalent/similar service for openstack
<hazmat> bloodearnest, route53 is not ec2 only.
<hazmat> its dns
<rick_h_> hazmat: right, I was thinking more an example of doing DNS tweaks based on relations via a charm
<hazmat> there are some ec2 specific provisions granted, but nothing prevents general use
<hazmat> rick_h_, thats exactly what the aws-route53 charm does
<lazyPower> oh brilliant
<rick_h_> hazmat: right
<hazmat> most of my aws charms back into https://github.com/kapilt/awsjuju .. a couple of other goodies in there
<hazmat> but i'd label it pre-alpha
<hazmat> except the automated backup stuff
<hazmat> which is just alpha :-)
<bloodearnest> hazmat: huh, TIL. Maybe an option then.
<bloodearnest> thanks
<jose> jcastro: did you get to take a look at the postfix charm?
<jcastro> not yet no
<jcastro> I see it on the list though
<jose> well, I'll wait then :)
<jamespage> adam_g, I switched all the icehouse branchs to openstack-charmers btw - https://code.launchpad.net/~openstack-charmers
#juju 2014-01-29
<ianous> Hello. Anyone here?
<thumper> ianous: kinda
<thumper> but just EODing
<thumper> there are others who are around though
<thumper> just ask your questions :)
<thumper> or say hi
<ianous> At this hour...I'm surprised someone's actually here.
<lifeless> ianous: 6pm ?
<ianous> It's 7 am here...
<lifeless> ianous: welcome to the globe
<ianous> *gasps*
<lifeless> I know, right!
<ianous> it's actually nice but I have no idea what timetables people are usually around here.
<ianous> You know..peak hours and sorts
<ianous> Anyone knows btw about using private openstacks with juju? I'm trying to get behind the logic with generate-image and public buckets and all.
<ianous> And I feel kinda lost
<noodles775> marcoceppi: Have you seen this error in amulet before? As if it's got the 2.x urllib module loaded... http://paste.ubuntu.com/6837138/
<marcoceppi> noodles775: yes, this was an issue I think I patched in trunk
<noodles775> marcoceppi: cool, I'll run with trunk again. Thanks.
<marcoceppi> noodles775: yeah, sorry about that it'll be released with that patch this week
<mramm> frankban: you around?
<hazmat> jamespage, i heard you mentioned that deployer is incompatible with 1.17 which isn't true afaics, what's that about?
<jamespage> hazmat, "state watcher was stopped" ?
<jamespage> hazmat, sometime during a deployment, the deployer exits
<hazmat> jamespage, that's fixed in juju-core trunk
<jamespage> with that error message
<jamespage> hazmat, ok - so is juju-deployer compatible with any released version of juju?
<Beret> hazmat, http://pastebin.ubuntu.com/6837352/
<Beret> ok
<frankban> mramm: yes I am
<hazmat> jamespage, well that issue can appear with 1.17 .. 1.17.1 has the fix though.
<jamespage> hazmat, ah - ok - nudging that through now
<mramm> hazmat: ok
<hazmat> and that issue can appear with 1.16.5.. i can try for a work-around.
<mramm> hazmat: I think that's the thing that was causing the relations not to be added on some of our demos
<mramm> and why we have been pushing forward to 1.17.1 this morning
<ev> marcoceppi: is amulet breaking with include-base64 a problem you're aware of? We ran into this: https://code.launchpad.net/~doanac/ubuntu-ci-services-itself/amulet-unit-config-fix/+merge/203729
<marcoceppi> ev: I didn't know there was a base64 include option in deployer
<marcoceppi> amulet has no idea about this feature, could you file a bug? I can probably get a fix out this week
<noodles775> marcoceppi: Having trunk amulet on the pythonpath is enough for running tests manually, but not for running via juju test? Do you know what I need there? http://paste.ubuntu.com/6838092/
<marcoceppi> noodles775: yeah, PYTHONPATH isn't pushed in to the testing environment. I'll add that to the whitelist good catch
<ev> marcoceppi: Chris Johnston is on it now
<marcoceppi> noodles775: I've got a patch for that landing in charm-tools now, but I'll have to roll a release of charm-tools for the new test
<marcoceppi> proably won't land until later tonight
<noodles775> marcoceppi: Great - thanks. No rush here, I can keep running tests manually :)
<cjohnston> marcoceppi: bug #1274142
<marcoceppi> cjohnston: ta!
<tomixxx> hi, is juju fully installed on a node if i can read: "cloud-init boot finished at Wed,...... +0000. Up 136.3 seconds" ?
<tomixxx> plz help, i have now installed juju, do i have to deploy services on my maas-server or on the juju-node?
<tomixxx> if i hit "juju deploy mysql" on the maas-server no answer is prompted from the terminal
<tomixxx> on my node, i can see "cloud1 login:"
<tomixxx> if i enter juju status it prints me nothing.... i get simply no terminal output...
<tomixxx> does someone know what the problem is ? :(
<marcoceppi> tomixxx: what does the output of juju status look like, please put it in paste.ubuntu.com
<tomixxx> there is nothing, only the black background after hitting "enter"
<tomixxx> one of my nodes shows me "cloud1 login:" but i dont know what username and password are needed here...
<tomixxx> do i have to enter "juju status" on the maas-server-terminal or on the node?
<marcoceppi> tomixxx: you enter juju status from where you ran juju bootstrap
<tomixxx> marcoceppi: ok, that is the maas-server then
<marcoceppi> tomixxx: can you please run `juju version` then `juju status --debug` and add the outputs to paste.ubuntu.com?
<tomixxx> marcoceppi: ok
<tomixxx> http://pastebin.ubuntu.com/6838480
<marcoceppi> tomixxx: did you run sudo bootstrap or just bootstrap?
<marcoceppi> sudo juju bootstrap* or just juju bootstrap*
<tomixxx> only "juju  bootstrap"
<marcoceppi> tomixxx: okay, good
<marcoceppi> tomixxx: can you `ssh cloud1.master`
<marcoceppi> err
<marcoceppi> tomixxx: can you `ssh ubuntu@cloud1.master`
<tomixxx> it says "could not resolve hostname cloud.master: name or service not known"
<marcoceppi> tomixxx: do you have dns running on your maas node?
<marcoceppi> tomixxx: paste initctl list | grep maas
<tomixxx> marcoceppi: pastebin.ubuntu.com/6838547
<tomixxx> marcoceppi: i have set the interface of the cluster-controller to be managed "dhcp and dns"
<tomixxx> btw, i have replaced the host name of the nodes from the suggested one (i guess it was 10.0.0.100-00 or sth like that) to "cloud1", "cloud2" and so on
<tomixxx> but this should not be a problem, i guess
<marcoceppi> tomixxx: this sounds like a networking/dns issue with maas. Can you ssh directly in to the ip address?
<tomixxx> you mean i should try "ssh ubuntu@10.0.0.100.master"?
<marcoceppi> no, 10.0.0.100, if that's the IP address it's allocated
<tomixxx> kk
<tomixxx> seems to work, it asks me "Are you sure you want to continue connecting (yes/no)?"
<tomixxx> and in front it says "The authenticity of host '10.0.0.100' cant be established. ECDSA key fingerprint is xx:xx:..
<tomixxx> should i say "yes" ?
<jcastro> hey the queue doesn't look so bad today
<tomixxx> hmm
<tomixxx> when i open head in "resolv.conf.d" there is nothing in the file
<tomixxx> then i have a file called "original" with search ... and nameserver ... entries
<tomixxx> maybe, sometime in past, i have modified head for some reason?
<tomixxx> could this cause the problem?
<tomixxx> marcoceppi: are u still here?
<marcoceppi> tomixxx: yes, one second
<tomixxx> k
<tomixxx> marcoceppi: i have to go now. But i will be here tomorrow again. if u have some hint for me, you can send me a private message. would appreciate it!
<roaksoax> hi all
<roaksoax> so I deployed a charm from the charm store
<roaksoax> and now I need to upgrade the charm from a local repository
<roaksoax> how can I do that?
<roaksoax> marcoceppi: thoughts?
<roaksoax> roaksoax@pursue:~/test$ juju upgrade-charm --repository . local:hacluster
<roaksoax> error: invalid service name "local:hacluster"
<marcoceppi> roaksoax: you can run --switch
<marcoceppi> juju upgrade-charm --switch --repository . local:hacluster
<marcoceppi> roaksoax: it's very hulk smashish in that it ignores revisions all together
<roaksoax> marcoceppi: ok cool, thanks!
<roaksoax> marcoceppi: roaksoax@pursue:~/test$ juju upgrade-charm --switch --repository . local:haclsuter
<roaksoax> error: unrecognized args: ["local:haclsuter"]
<roaksoax> marcoceppi: got it
<roaksoax> roaksoax@pursue:~/test$ juju upgrade-charm --switch local:hacluster hacluster
<jcastro> hey lazyPower
<jcastro> I saw you touching owncloud
<jcastro> I think the install hook needs an update btw
<jcastro> last I checked the version was woefully out of date
<lazyPower> yeah
<lazyPower> its not providing a version identifier
<lazyPower> Already in my notes, currently on hold while i review OpenMRS
<jcastro> ack
<lazyPower> i may tag it, and after the audit branch it and give it some upgrade lovin
<jcastro> lazyPower, I have this thing where I wish all charms had version options in config
<lazyPower> The picky part of that is if the charm installs from upstream, how do you validate the hash when someone chooses a newer version?
<jcastro> yeah
<lazyPower> mbruzek and I are actually in a conversation about this as we speak
<jcastro> http://download.owncloud.com/download/repositories/xUbuntu_12.04/all/
<jcastro> in this case you could config the enterprise version
<jcastro> the .org stuff has an opensuse build system repo, so there's kind of like 3 options
<mbruzek> Hi jcastro I want more details on how you would implement a version option.
<mbruzek> So the 1.9.7 version of OpenMRS is current, I got a review comment that I need to cryptographically verify the file I download
<mbruzek> Any newer version we would not be able to verify.  Is that OK?
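The download verification that review comment asks for can be sketched in a few lines; `verify_sha256` is a hypothetical helper for illustration, not code from any charm discussed here:

```python
import hashlib

def verify_sha256(path, expected_hex):
    """Return True if the file's SHA-256 digest matches the published one."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # read in chunks so large tarballs don't need to fit in memory
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex

# demo with a locally written file standing in for a downloaded tarball
with open("/tmp/demo-release.tgz", "wb") as f:
    f.write(b"release tarball contents")
known = hashlib.sha256(b"release tarball contents").hexdigest()
print(verify_sha256("/tmp/demo-release.tgz", known))  # True
```

The trade-off mbruzek raises still stands: a config-selected newer version only has a pinned digest if the charm ships one per version, so releases beyond the shipped list can't be verified this way.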
<jcastro> yeah that's a good question
<sarnold> why not?
 * jcastro sees what marco did in phpmyadmin
<jcastro> https://github.com/charms/phpmyadmin/blob/master/bin/parse_upstream
<jcastro> heh
<lazyPower> his comment at the top of the file is priceless
<lazyPower> marco++
 * mbruzek is checking if there is a hash to verify against
<sarnold> oh jeeze; preg_replace(.../e)
<mbruzek> sarnold, was that to me?
<lazyPower> nope, for marco's parser
<lazyPower> line 10
<jcastro> sarnold, the entire thing is glorious
<sarnold> mbruzek: it is my wailing, gnashing of teeth, and rending asunder my clothing
<sarnold> please tell me we don't ship that
<jcastro> lazyPower, wait until you start to find that there are plenty of upstreams that don't sign their releases at all, depressing.
<lazyPower> sarnold, http://sideshowsito.com/hulk_smash_loki.gif
<sarnold> lazyPower: yes please :)
<lazyPower> i feel like that line is about as useful as goto, in specific contexts, very acceptable, but comes with such a stigma that most developers choose to stray away from it.
<lazyPower> and as much as it may be hated, its useful in this context
<jcastro> https://bugs.launchpad.net/charm-tools/+bug/1274255
<sarnold> lazyPower: when the context appears to be "load random content from the internet and then proceed to execute it", it's hard for me to be too sympathetic
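For context, the hazard sarnold is reacting to is PHP's deprecated `/e` modifier, which evaluates the replacement string as code; the safe pattern in any language is a callback instead of evaluated text. A Python sketch of the callback style (the pattern and text are made up for illustration):

```python
import re

# Safe pattern: pass a function as the replacement instead of a string
# of code to be evaluated (the hazard of PHP's preg_replace /e modifier).
text = "release v1.2.3-beta"
result = re.sub(r"v([0-9.]+)-(\w+)",
                lambda m: "%s (%s)" % (m.group(1), m.group(2)),
                text)
print(result)  # release 1.2.3 (beta)
```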
<jcastro> anyone have issues with this?
<rick_h_> jcastro: doing that will block ingestion
<jcastro> so I was thinking maybe a good idea would be for config to pass along a URL to an upstream source?
<rick_h_> jcastro: so any charm without a category will not get updated until touched
<jcastro> rick_h_, what do you mean?
<jcastro> oh, as in, you use charm-tools as part of the ingestion process and that would break stuff
<rick_h_> jcastro: if charm proof makes that an error, no charm without a category can get updated in the store. They'll go into an error state
<jcastro> rick_h_, would doing this post-audit make more sense then?
<rick_h_> jcastro: yes, in order for a charm to get past ingestion it needs to pass proof without errors
<rick_h_> jcastro: +1
<rick_h_> jcastro: it's a warning and should be part of getting 'recommended' but not required as an error imo
<rick_h_> policy vs proof basically
<jcastro> rick_h_, I've updated the bug, feel free to add on
<rick_h_> jcastro: cool
<lazyPower> sarnold, ah, i was looking into attack vectors utilizing preg_replace, i see where you're coming from now
<ev> hazmat: does juju-deployer not support ;revno= in bzr branches? Doesn't seem to work
<jcastro> hey sinzui
<jcastro> are there backport-for-12.04 plans for the 1.18.x series?
<sinzui> jcastro, We make backports, but I don't know what the SRU policy will be.
<sinzui> jcastro, there might be blockers such as switching juju to juju-mongodb and gccgo
<jcastro> ok I will ask around
<hazmat> ev re deployer revision support.. syntax is   lp:bzr@revno
<hazmat> ev, also works for git repos
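Put into a deployer config, that syntax would look roughly like this; the deployment and charm names are hypothetical, only the `@revno` suffix comes from hazmat's answer:

```yaml
# hypothetical deployer stanza: pinning a charm branch to bzr revno 42
my-deployment:
  series: precise
  services:
    my-service:
      branch: lp:~someone/charms/precise/my-charm/trunk@42
```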
<modelife> hi
<ev> hazmat: star! Thanks!
<perrito666> fwereade: hi
<fwereade> perrito666, heyhey
<perrito666> you caught me online :p
#juju 2014-01-30
<jseutter_> does anyone know how to get juju working with hpcloud?  I tried following the instructions on https://juju.ubuntu.com/docs/config-hpcloud.html but get an error saying "cannot create service urls"
<lazyPower> jseutter_, what is your geo set to?
<lazyPower> s/geo/region
<jseutter_> lazyPower: Ah! I just changed that to az1-region-b.geo-1 and removed hpcloud.jenv, but still get the same error
<lazyPower> Hmm, that was going to be my suggestion
<jseutter_> hm, looking closer, the error also mentions "access to these services is missing: object-store"
<jseutter_> I wonder if I need to enable swift in hpcloud somehow
<lazyPower> https://bugs.launchpad.net/juju-core/+bug/1237011, jseutter_ have you read this bug report?
<lazyPower> according to wallyworld, you don't need to enable those anymore. it should work ootb
<jseutter_> lazyPower: hm.  Is that because of a change in hpcloud, or something in juju?
<lazyPower> I would assume juju
<lazyPower> but I'm not certain of that
<lazyPower> well wally does mention swift in a post, so you may still need swift for juju to function correctly, and that is the object store in openstack yes?
<lazyPower> it sounds plausible
<jseutter_> Found the problem.
<jseutter_> doh, hit the next one
<jseutter_> the issue was that in hpcloud, you have to hit the "activate" button on object storage to get access to swift
<jseutter_> now I get "cannot start bootstrap instance: index file has no data for cloud {az1-region-b.geo-1 https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/} not found"
<jseutter_> hitting that page in my browser gives an xml response that says "Not Allowed".
<lazyPower> jseutter_, just a suggestion, but check this AU post
<lazyPower> http://askubuntu.com/a/166081/6807
<lazyPower> did you follow those steps to get started?
<lazyPower> i'm not terribly familiar with HPCloud so this is a learning experience for the both of us
<jseutter_> lazyPower: heh.  I started there first, but that askubuntu post seems older than the directions on juju.ubuntu.com that I referenced earlier
<lazyPower> Ok, I'll make a note to get them updated. Thanks for that feedback
<jseutter_> like in that askubuntu post, hpcloud didn't have ubuntu images yet
<jseutter_> lazyPower: The juju.ubuntu.com page for hpcloud is fairly badly out of date as well.  I can file a bug if it would help
<jseutter_> it looks like hpcloud recently upgraded the version of openstack horizon they are based off of
<lazyPower> jseutter_, please do - https://bugs.launchpad.net/juju-core/docs
<jseutter_> lazyPower: yeah, it's my first time with hpcloud as well. I spent the day trying to get juju working with devstack, then gave up and tried hpcloud as a last resort
<lazyPower> I've got juju working on Amazon and LXC providers, that's right around the end of my knowledge of the providers
<lazyPower> I figure I'll be digging around in openstack very soon, but unfortunately I'm not much help to you right now
<lazyPower> aside from moral support and extra google'ing
<jseutter_> lazyPower: thanks, that helps.  I'm working on a charm that talks to nova, so I need some sort of working openstack install
<jseutter_> I might try rackspace next
<jseutter_> heh, apparently rackspace is a non-starter
<jseutter_> lazyPower: https://bugs.launchpad.net/juju-core/+bug/1274379 filed
<jseutter_> hm, wonder who else offers openstack that will work with juju...
<lazyPower> The only listed providers are HP and MAAS/openstack
<lazyPower> the manual provisioner isn't completed
<lazyPower> I'll reference this bug tomorrow in my standup and see if there is already WIP on the hpcloud docs being updated.
<lazyPower> but for now i'm going to EOD, its well past quitting time
<jseutter_> ah, thanks.
<jseutter_> same here
<lazyPower> if you need anything else, dont hesitate to ask jseutter_
<lazyPower> take care
<wallyworld_> jseutter_: lazyPower : sorry, i just noticed your posts. from memory, there is only images metadata for region a right now on hp cloud, so for region b you need to generate and use your own metadata
<lazyPower> wallyworld_, for clarity can you point me to information regarding that? I ran across a prior transcript from IRC without instructions.
<jseutter_> wallyworld_: ah, I'll try region a then
<wallyworld_> i just manually inspected the json metadata file to confirm it only had region a in it.
<wallyworld_> i'm not sure of the current state of the doco
<wallyworld_> i can ask to see if the folks doing the roll outs can include metadata for region b
<wallyworld_> there's a getting started guide somewhere, i'll see if i can find a link
<wallyworld_> the only doc i know of is https://juju.ubuntu.com/docs/config-hpcloud.html
<wallyworld_> i haven't read it so am not sure if it's up to date
<wallyworld_> it's a little dated
<wallyworld_> you no longer need public-bucket-url or control-bucket if running juju 1.16.5 or greater i think
<lazyPower> wallyworld_, ah yeah i've read through that. Sorry i thought there would be more information specific to generating the metadata
<wallyworld_> ah, i'm not sure if or where that's documented yet. what version of juju are you running
<lazyPower> 1.17.1
<arosales> jseutter_, are you running 1.17.x?
<wallyworld_> here's the current draft doco, there's a bug to turn it into user doc http://pastebin.ubuntu.com/6841941/
<jseutter_> wallyworld_: so I tried switching regions and the error is still the same. "ERROR cannot start bootstrap instance: index file has no data for cloud {az1-region-a.geo-1 https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/} not found"
<arosales> lazyPower, if you are running 1.17.x in hp cloud you need to add     tools-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/60502529753910/juju-dist/tools
<arosales> to your hp stanza
<jseutter_> arosales: 1.16.4-saucy-amd64
<arosales> jseutter_, ah 1.16.4 shouldn't need the tools url
<arosales> but lazyPower if you are running 1.17 on hpcloud you will need the tools url
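Pieced together, the hp stanza arosales describes would look something like this in environments.yaml; the tools-url line is the one quoted above, while the region, auth-url (taken from the error message earlier), and credential values are placeholders for illustration:

```yaml
# sketch of an environments.yaml hp cloud stanza for juju 1.17.x
environments:
  hpcloud:
    type: openstack
    region: az-1.region-a.geo-1          # placeholder region name
    auth-url: https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/
    username: your-hp-username           # placeholder credentials
    password: your-hp-password
    tenant-name: your-tenant
    tools-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/60502529753910/juju-dist/tools
```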
<wallyworld_> lazyPower: also, 1.17.1 is supposed to automatically upload image metadata to the cloud storage if you generate it so the bit about uploading using swift can be ignored
<wallyworld_> arosales: we *really* need to get metadata/tools up on streams.canonical.com
<arosales> jseutter_, and you're getting a 404 in region a?
<wallyworld_> arosales:  this was supposed to happen before 1.17 was released
<arosales> jseutter_, remember if you switch to region a to activate both compute and storage for region a
<jseutter_> arosales: checking..
<arosales> wallyworld_, simple streams has been fun this week
<wallyworld_> yeah, especially for maas :-(
<wallyworld_> we gave roger a workaround though
<wallyworld_> if streams.c.com was up, that wouldn't have been an issue for maas either
<arosales> wallyworld_, at least there is that
<wallyworld_> arosales: root cause there was that maas can't return a url to a file unless it has already been uploaded
<wallyworld_> other providers don't have that issue
<wallyworld_> and we missed it cause we don't yet have CI testing for maas
<jseutter_> arosales: compute and object are activated for region a.  When I hit the url from the error in my browser, I get an xml message that says "Not Allowed". Not sure if that's a valid test or not.
<wallyworld_> that xml message is cause tools/image metadata cannot be found - it is a poor message but is fixed in 1.17.1
<wallyworld_> it would be good to run bootstrap with --debug --show-log to see the output so we can tell where it's trying to find tools
<arosales> wallyworld_, we need better ci with maas for sure
<wallyworld_> arosales: william and others are now going to push hard to get that happening
<arosales> +1
<arosales> jseutter_, can you pastebin juju --debug bootstrap?
<wallyworld_> arosales: we also need, it seems, to get other regions besides region a supported on hp cloud
<arosales> wallyworld_, the cpc is going to be working on indexing hp in february
<wallyworld_> yay
<jseutter_> arosales: are there any secrets in the output?
<arosales> jseutter_, nope
<jseutter_> arosales: http://pastebin.ubuntu.com/6841977/
<arosales> jseutter_, do you have
<arosales> public-bucket-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/60502529753910
<arosales> for your hp stanza in your environments.yaml ?
<wallyworld_> arosales: public bucket not required
<wallyworld_> log shows: certified cloud tools-url set to https://region-a.geo-1.objects.hpcloudsvc.com:443/v1/60502529753910/juju-dist/tools
<arosales> jseutter_, also here is an example of my hp stanza that I had working for 1.16
<arosales> wallyworld_, even for 1.16?
<wallyworld_> yeah
<arosales> hmm
<wallyworld_> in 1.16, we made juju know about hp cloud
<wallyworld_> that line i just pasted above shows the tools url being set for hp cloud
<arosales> wallyworld_, good point
<wallyworld_> this line shows tools meta data being read
<arosales> wallyworld_, I had that from previous juju versions I guess
<wallyworld_>  finding products at path "streams/v1/com.ubuntu.juju:released:tools.json"
<wallyworld_> yeah
<wallyworld_> this line shows tools being found
<wallyworld_>  juju.environs.boostrap bootstrap.go:71 environs: picked newest version: 1.16.5
<jseutter_> arosales: I added public-bucket-url, http://pastebin.ubuntu.com/6841998/
<wallyworld_> last lines show that it's image metadata not found
<wallyworld_> on cloud-images.ubuntu.com
<wallyworld_> so tools are ok
<arosales> jseutter_, same error?
<wallyworld_> jseutter_: public bucket url not needed, finding the image metadata is the issue
<wallyworld_> let me check the image metadata
<jseutter_> arosales: yeah, seems exactly the same
<arosales> jseutter_, ah what is your default series?
<wallyworld_> arosales: jseutter_ the region is wrong
<arosales> jseutter_, sorry to have you try public-bucket; wallyworld_ pointed out the tools are being found
<wallyworld_> you need to set the region to az-1.region-a.geo-1
<wallyworld_> not az1.region-a.geo-1
<arosales> wallyworld_, nice eye
<jseutter_> really?
 * jseutter_ smacks forehead
<wallyworld_> it took me squinting hard to see it :-)
<arosales> jseutter_, make sure you have "region: az-1.region-a.geo-1"
<wallyworld_> so in summary: juju "just works" for hp cloud and region a. no urls in config needed
<wallyworld_> other regions coming in feb hopefully :-)
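The fix jseutter_ needed, expressed as a config fragment (the rest of the stanza stays as-is; the whole problem was a missing hyphen):

```yaml
hp:
  # correct: az-1.region-a.geo-1  (not az1.region-a.geo-1)
  region: az-1.region-a.geo-1
```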
<arosales> also of note we don't have saucy images in hp cloud yet, just precise. However, it looks like you are using precise too based off your output
<arosales> wallyworld_, thanks
<wallyworld_> np
<jseutter_> arosales: I don't have a default-series set for my hpcloud config section
<arosales> jseutter_, no worries it should default to precise
<arosales> and your output looks like that is what it is using
<jseutter_> arosales: error seems unchanged - http://pastebin.ubuntu.com/6842022/
<arosales> jseutter_, try to rm ~/.juju/environments/hp.jenv
<arosales> given you call your environments.yaml "hp"
<jseutter_> arosales: I named it hpcloud, and I have been verifying that destroy-environment blows away the file in environments/
<jseutter_> just tried it again, no change
<jseutter_> hm
<wallyworld_> jseutter_: it's az-1.region.....
<wallyworld_> not az-1-region.....
<wallyworld_> that info is not Juju btw, that comes from hp cloud
<wallyworld_> juju needs to match with how hp cloud exports its region names
<jseutter_> fsck, sorry about that
<wallyworld_> np :)
<wallyworld_> easy to do a typo for sure, i've made lots of similar mistakes myself
<wallyworld_> i also say a lot worse than fsck :-)
<jseutter_> finally, a different error: http://pastebin.ubuntu.com/6842041/ :)
<jseutter_> hehe, trying to keep it pc.  Today I can verify that my keyboard is resistant to beatings
<arosales> jseutter_, lol
<wallyworld_> jseutter_: i've not seen that one. juju got told to ask hp cloud to start an instance with image id 81078
 * arosales taking a look at your recent debug log
<wallyworld_> hp cloud complained
<wallyworld_> perhaps the image ids we are using are out of date?
<wallyworld_> i think there's a nova command to list the current ids
<wallyworld_> nova image-list
<jseutter_> hm, is there some way to get a novarc out of hpcloud?
<arosales> wallyworld_, but that would be metadata thought right?
<arosales> s/thought/though/
<wallyworld_> arosales: yeah, i've just done a listing and there's no such image id anymore
<wallyworld_> arosales: so our image metadata appears stale
 * arosales tries a bootstrap on hpcloud
<wallyworld_> jseutter_: i just set up a novarc file with exported values for region, username, password etc
<wallyworld_> so you could do it by hand if you have all the values, which you do cause i'm guessing they are in your env.yaml
<jseutter_> wallyworld_: I'll give it a shot
<arosales> ugh I am on 1.17
<arosales> jseutter_, https://docs.hpcloud.com/cli/nova
<arosales> for the nova set up
<wallyworld_> jseutter_: here's mine http://pastebin.ubuntu.com/6842083/
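wallyworld_'s pastebin isn't preserved in the log, but a hand-written novarc would look roughly like this. Every value here is a placeholder (the auth URL and region are taken from earlier in this conversation); the OS_* variable names are the standard ones the nova client reads.

```shell
# Hypothetical novarc for HP Cloud; all values are placeholders.
export OS_AUTH_URL="https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/"
export OS_USERNAME="your-username"
export OS_PASSWORD="your-password"
export OS_TENANT_NAME="your-project"
export OS_REGION_NAME="az-1.region-a.geo-1"
```

Usage would then be `source novarc && nova image-list` to see the image ids the cloud currently offers.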
<arosales> wallyworld_, so is image stream data different for 1.16 and 1.17
<wallyworld_> arosales: no, should be the same. i'd say hp added new images and deleted old ones and we haven't kept our metadata current
<arosales> wallyworld_, so I get a successful bootstrap on hpcloud with 1.17
<wallyworld_> hmmm. do you know what image id was used?
<wallyworld_> what region?
<wallyworld_> az-1, 2 or 3?
<wallyworld_> arosales: the 3 image ids in our metadata are 49850, 68481, and 81078
<arosales> az1
<wallyworld_> for regions 3, 2, 1
<arosales> wallyworld_, http://pastebin.ubuntu.com/6842103/
<wallyworld_> arosales: can you look at the hp console and see what image id was chosen?
<arosales> sure one sec
<arosales> wallyworld_, 81078
<wallyworld_> arosales: weird. that's the same image id as jseutter_ got but his request to hp cloud was rejected
<wallyworld_> maybe it was a transient error
<arosales> the ids change per region
<jseutter_> hm, is there something else I need to click in the gui?
<wallyworld_> i think it was also region az-1
<arosales> jseutter_, are you in US West AZ 1
<arosales> jseutter_, are you in the classic or new console?
<jseutter_> arosales: I have "region: az-1.region-a.geo-1" in my config section
<jseutter_> arosales: new console
<jseutter_> I just signed up with a trial today, I didn't get a choice to use the old console
 * arosales traversing their new console
<arosales> hmm the new  console isn't showing me us west az1 compute image ids
<arosales> jseutter_, I _think_ you are operating in a new region
<jseutter_> arosales: hm, that would make sense.  Is there any way I can tell?  When I launch an instance in the gui I still get az1 as a choice of where my instance will run
<arosales> jseutter_, if you go here
<arosales> http://www.hpcloud.com/console
<arosales> do you see an option for HP Classic Cloud Management Console
<jseutter_> yes.  I haven't used it yet
<arosales> jseutter_, its different services from hp
<arosales> pre and post 13.5
<jseutter_> arosales: when I click the Activate button in the classic console, I get automagically booted to the new console
<arosales> when I logged into the new console us west was not activated for me, even though I knew it was
<arosales> jseutter_, interesting
<arosales> jseutter_, on https://console.hpcloud.com/dashboard
<arosales> do you see a button for us west az1
<arosales> under "infrastructure services"
<jseutter_> arosales: no.  When I click anything on that page I get sent to horizon.hpcloud.com
<arosales> hmm ok
<arosales> jseutter_, out of curiosity does https://console.hpcloud.com/compute/az-1_region-a_geo-1/servers load for you
<arosales> jseutter_, looking at the two consoles I don't see an Ubuntu 12.04 image under partner provided  images in the new console
<arosales> however I do see this image in the old console under partner provided images
<jseutter_> arosales: no.  I get redirected to https://console.hpcloud.com/dashboard and there is a red error message in the top right that says "The requested service does not exist in the project"
<arosales> and that is image 81078 which juju is trying to bootstrap
<arosales> ok, this looks like an issue with hpcloud that they are not bringing forward partner provided images
<jseutter_> arosales: do you see this image? https://horizon.hpcloud.com/project/images_and_snapshots/8c096c29-a666-4b82-99c4-c77dc70cfb40/
<arosales> specifically they did not bring Ubuntu 12.04 (81078) that Canonical provided and that juju is trying to bootstrap
<arosales> Warning: You are not authorized to access /project/images_and_snapshots/8c096c29-a666-4b82-99c4-c77dc70cfb40/
<jseutter_> That for me is an image "Ubuntu Precise 12.04 LTS Server 64-bit 20121026 (b)"
<jseutter_> ah, I see that some of the images in the list say "Partner Provided", but the ubuntu ones don't
<jseutter_> err, "Partner Image:
<arosales> jseutter_, basically there should be a 12.04 image under partner provided
<jseutter_> nuts.
<arosales> jseutter_, since I had an account from before, I can still access that image in the pre 13.5 hp services
<arosales> jseutter_, I am going to open a ticket with hp cloud
<jseutter_> and probably why no one saw it before now.
<arosales> jseutter_, sorry won't help you right now but hopefully we should see a resolution soon
<arosales> jseutter_, it may be that your account is new
<jseutter_> arosales: yeah, I just signed up a few hours ago
<jseutter_> arosales: thanks for all your help, I owe you and wallyworld_ a beer when I see you guys
<arosales> jseutter_, nah thanks for the feedback. Its appreciated.
<wallyworld_> hey, my pleasure. pleased to help
<wallyworld_> sorry you had all the issues
<jseutter_> heh, we just added more work to the pile today :)
<arosales> jseutter_, I'll give you a ping when I find a conclusion here and also update our juju docs
<jseutter_> arosales: cool, thanks
 * arosales filing a hp support ticket
<arosales> jseutter_, there is always canonistack
<jseutter_> arosales: I've been fighting with it all day, haven't been able to successfully bootstrap _once_ today in either region.
<jseutter_> I tried devstack for a bit but gave up when I found an email saying it was a bad idea to try it with juju
<arosales> jseutter_, geesh you just can't catch a break
<jseutter_> heh, it sorta feels that way.  Thanks again
 * arosales confirmed that there is a bug on HP's end. Specifically HP had not moved 12.12 partner images over to 13.5
<arosales> so new users can't see 12.12 services, only 13.5
<arosales> thus bootstrapping fails as the partner images are not in 13.5 :-(
<arosales> hopefully there will be a resolution soon
<arosales> wallyworld_, ^ fyi
<wallyworld_> ah ok, thanks
<ianous> Does anyone know if juju sends auth-token to retrieve files from buckets?
<tomixxx> hi, it seems i have some DNS resolve problems: when i enter ssh cloud1.master it says: "ssh: Could not resolve hostname cloud1.master: Name or service not known"
<lazyPower> tomixxx, that's not a valid FQDN. is cloud1.master the name of a service you've deployed with juju?
<tomixxx> no, its the fqdn
<tomixxx> ah sorry
<tomixxx> the fqdn is "cloud1"
<tomixxx> .master is added by MAAS because i have enabled "DHCP and DNS"
<lazyPower> Whats the IP of cloud1? Can you try ssh'ing to the IP instead of the assigned hostname?
<tomixxx> ip is 10.0.0.100
<tomixxx> and yes, i can ssh with that
<tomixxx> but the problem is that juju tries to connect with "cloud1" and not with the IP, so juju gets stuck when i call for example "juju status"
<tomixxx> hmm it seems its sufficient to add the nameserver to the interface, namely "dns-nameservers 10.0.0.9"
<marcoceppi> tomixxx: you need to set your resolv.conf on your maas master to point to your DNS server
<marcoceppi> tomixxx: while this won't survive reboots, edit /etc/resolv.conf at the top add nameserver 10.0.0.1 save it then try to ssh to cloud1.master
<tomixxx> hi marcoceppi: i have modified /etc/network/interfaces and added a "dns-nameservers" line, and now resolv.conf is already set on restart
<tomixxx> i can ssh cloud1.master now
<marcoceppi> tomixxx: okay, cool. You'll need to edit your /etc/resolvconf/resolv.conf.d/head to have the nameserver 10.0.0.1 and on another line add search master
<marcoceppi> then run `sudo resolvconf -u` and then all you should need to do is type ssh cloud1 (no need for .master anymore) to get to that box
<marcoceppi> from there juju status, juju ssh, etc should work without issue and you can carry on deploying
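The DNS setup marcoceppi describes, sketched so it can run without root by writing to a scratch directory. On the real MAAS master, DEST would be /etc/resolvconf/resolv.conf.d, and 10.0.0.1 / "master" are this particular network's DNS server and domain.

```shell
# Scratch copy of /etc/resolvconf/resolv.conf.d so this runs unprivileged.
DEST="${DEST:-/tmp/resolvconf-demo}"
mkdir -p "$DEST"
# The "head" file is prepended to the generated resolv.conf:
printf 'nameserver 10.0.0.1\nsearch master\n' > "$DEST/head"
cat "$DEST/head"
# On the real host, follow with: sudo resolvconf -u
# after which `ssh cloud1` resolves via the "master" search domain.
```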
<tomixxx> i want to re-install juju complete because the dns problem was already there when i added the nodes to maas (when they failed to download some packages)
<tomixxx> the problem is, i cannot delete cloud1.master, because the node is "allocated to root" because it is needed as juju bootstrap node.
<marcoceppi> tomixxx: you should be able to run `juju destroy-environment maas`
<marcoceppi> which should un-allocate it from root and return it to the pool of available machines
<tomixxx> i have executed sudo apt-get purge juju before, so i cannot do it anymore :D
<tomixxx> ok, i will reinstall juju package in order to call destroy-environemnt
<tomixxx> damn does not work
<tomixxx> it says "error: unrecognized args: ["maas"]"
<tomixxx> kk, calling "juju destroy-environment" is sufficient
<tomixxx> ah help... the nodes cannot download packages again...
<tomixxx> can i have multiple "search" statements in resolv.conf?
<tomixxx> ok, now ubuntu+juju is installed again on my first cloud node, but i get stuck again when i call "juju status"
<tomixxx> is this ok, will juju status only work if installation of cloud node is finished?
<tomixxx> at the moment, "ssh ubuntu@cloud1" does not work either
<marcoceppi> tomixxx: yes, you need to wait for cloud1 to install
<tomixxx> OK
<tomixxx> ok, seems to work
<tomixxx> juju status works
<marcoceppi> tomixxx: then you're all set
<tomixxx> we will see
<tomixxx> however, i can already deploy services like mysql or rabbitmq-server :-)
<tomixxx> how can i check which services are already started with juju?
<lazyPower> tomixxx, juju status has a data point: agent-state
<tomixxx> ahh i see
<lazyPower> if agent-state = started, your charm has run its install hooks, and is in a ready state according to the orchestrator
<tomixxx> a question: when i run juju add-relation nova-compute rabbitmq-server
<tomixxx> i got "ERROR ambiguous relation: "nova-compute rabbitmq-server" could refer to "nova-compute:amqp rabbitmq-server:amqp"; "nova-compute:nrpe-external-master rabbitmq-server:nrpe-external-master""
<tomixxx> is that normal?
<tomixxx> and some juju services have agent-state: pending...
<tomixxx> on which machine are these service installed?
<tomixxx> i dont understand: i have only 2 nodes, and they are listed in "juju status", but so are a few other nodes, up to node 8, which i don't really have...
<tomixxx> Can someone look at this, please: http://pastebin.ubuntu.com/6844443
<tomixxx> why are there listed multiple nodes until node "8", when i only have two nodes?
<lazyPower> tomixxx, that is normal, there are two interfaces defined
<lazyPower> and you need to be explicit about which relationship you are building
<lazyPower> marcoceppi, tomixxx has some machine spin up issues it looks like. Any input on whats up with gomaasapi?
<tomixxx> what do u mean with "two interfaces" ?
<tomixxx> :( have i destroyed something?
<tomixxx> the install guide says: 1.It's assumed the user has sufficient number of nodes available in MAAS. If you don't intend to use the helper application, juju-jitsu and its deploy-to option allowing you to co-locate services, then the minimum deployment outlined below requires a minimum of 10 machines including the juju bootstrap node and MAAS server.
<tomixxx> Do i need really 10 PHYSICAL machines?
<tomixxx>  i have only 2
<tomixxx> 3 together with maas
<tomixxx> i want to start all service on one physical machine i guess
<tomixxx> lazyPower: can i simply ignore the nodes 2 to 8 ?
<tomixxx> lazyPower: Do you mean they come from the university network?
<lazyPower> tomixxx, 1 moment
<tomixxx> kk
<lazyPower> tomixxx, ah i'm not sure about the requirement of 10 physical machines.
<lazyPower> My exposure to maas has been very limited
<marcoceppi> tomixxx: what are you trying to do at the end of the day?
<marcoceppi> you do not need juju-jitsu anymore, --to support is in juju now
<tomixxx> marcoceppi: at the end of the day, i want to parallelize a Natural-Language-processing task with cloud computing
<tomixxx> so, my vision is that i can use openstack which will distribute the work to the nodes available in the cloud, roughly spoken
<marcoceppi> tomixxx: so, if you want to deploy openstack, you can do a lot of it in containers, which will allow you to run more density on fewer nodes
<marcoceppi> I think the least I've seen is something like 4 nodes for openstack. Then each additional node is just used for nova-compute
<tomixxx> additionally, the performance of this private cloud is going to be evaluated against the performance of public cloud solutions
<tomixxx> my question is now: can i deploy all service listed here: https://help.ubuntu.com/community/UbuntuCloudInfrastructure on one single machine?
<tomixxx> and the second question is: can i ignore the listed nodes 2 to 8 in the juju status output
<marcoceppi> tomixxx: what do you mean the listed nodes 2 to 8?
<tomixxx> http://pastebin.ubuntu.com/6844443 here u can see the output from "juju status"
<tomixxx> i have only 2 physical nodes, but other nodes are listed there, however, indicating they are not really available
<marcoceppi> tomixxx: no, those mean that juju can't allocate those nodes
<marcoceppi> tomixxx: you only have two MAAS nodes?
<tomixxx> yes
<marcoceppi> tomixxx: yeah, so that's why. Default juju story is to put one machine on one node
<marcoceppi> tomixxx: it's hard to demo a cloud infrastructure when you don't have a lot of cloud space
<tomixxx> ;-)
<marcoceppi> tomixxx: You can put almost all the services, except nova-compute, on LXC containers on one node
<marcoceppi> tomixxx: if you don't use swift, or any object store, you /could/ use just two machines
<marcoceppi> tomixxx: so what you'll want to do, is juju destroy-environment
<marcoceppi> this will re-allocate the nodes back to maas
<tomixxx> k
<tomixxx> done
<marcoceppi> tomixxx: give me about 5 mins
<tomixxx> k, the whole task looks as follows: a user uploads a number of plain text documents to the cloud, the cloud distributes the documents to the nodes (i.e. with map reduce or similar techniques), each node does the language analysis and stores the results in a distributed file system, and finally the user is able to download the results from the cloud.
<tomixxx> so, roughly spoken, i need to get things work simply because i need to write my master thesis and maybe i will use this technology later on in my company as well.
<tomixxx> so, i do this not "for fun"
<tomixxx> marcoceppi: are u still here?
<marcoceppi> tomixxx: yes, sorry
<tomixxx> np
<marcoceppi> tomixxx: you can use juju deploy --to lxc:0 for almost all the services except neutron (so don't use neutron for networking) and nova-compute
<marcoceppi> when you get to deploying nova-compute, just run juju deploy nova-compute to give it its own machine
<tomixxx> kk, why does nova-compute need its own node?
<tomixxx> (just for the sake of interest)
<marcoceppi> now this may not work for openstack, but almost all the services can be put in LXC containers on one node except nova-compute, which needs its own node
<marcoceppi> tomixxx: because that's what actually spins up VMs
<tomixxx> k
<marcoceppi> tomixxx: so when you want more capacity, you add another machine in maas, then you can juju add-unit nova-compute and you have more resources to run VMs in openstack
<tomixxx> k
<marcoceppi> you also can't use ceph, swift, or any of the osd stuff because of the lxc containers; you need physical machines for that
<marcoceppi> but if you don't need an object store, just the VM capacity, that should do
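The container-heavy layout marcoceppi suggests can be sketched as the command sequence below. This is an illustrative sketch, not a verified recipe: the exact set of charms (and their names) comes from the OpenStack charm collection linked earlier, and machine 0 is assumed to be the node hosting the containers.

```shell
# Control-plane services share machine 0 via LXC containers:
juju bootstrap
juju deploy --to lxc:0 mysql
juju deploy --to lxc:0 rabbitmq-server
juju deploy --to lxc:0 keystone
juju deploy --to lxc:0 nova-cloud-controller
juju deploy --to lxc:0 glance

# nova-compute actually spins up the VMs, so it gets its own
# physical node (no --to):
juju deploy nova-compute

# Later, once another MAAS node is enlisted, add capacity with:
juju add-unit nova-compute
```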
<tomixxx> what do u mean with "object store"
<tomixxx> marcoceppi: do i need to bootstrap juju again?
<marcoceppi> tomixxx: yes, after destroy-environment you'll need to bootstrap
<tomixxx> ok, i did "juju bootstrap --upload-tools" and it says "ERROR TLS handshake failed: x509: certificate signed by unknown authority"
<marcoceppi> tomixxx: that's interesting. run `rm -f ~/.juju/environments/*.jenv` then try again
<tomixxx> and do i need to reboot the node i guess?
<tomixxx> on which juju is going to be deployed
<tomixxx> because "juju status" stucks now again
<tomixxx> because i have no wake-on-lan ;)
<tomixxx> marcoceppi: however, i guess i know how to continue now and I thank you for your help so far ;)
<marcoceppi> tomixxx: yes, you'll need to reboot it
<tomixxx> kk
<mgz> what an unholy venv have you done to the amulet test setup marcoceppi...
<marcoceppi> mgz: not me
<marcoceppi> blame bcsaller and rick_h_ ;)
<rick_h_> :)
<mgz> it doesn't even work, skips the site-packages but doesn't include python3-yaml in the test deps
<marcoceppi> mgz: there's a lock step thing, or something, you have to do
<marcoceppi> like make build; make venv; make test
<marcoceppi> I think
 * marcoceppi looks
<rick_h_> make sysdeps && make && make test
<rick_h_> I think
<rick_h_> the whole thing stinks, but some issue in trust vs older and the path the py3 venv puts things (bin vs local/bin)
<rick_h_> trusty that is
 * marcoceppi confirms
<mgz> okay, now it ImportErrors on "from . import helpers" in amulet.deployer
<marcoceppi> mgz: if I run make sysdep && make && make test
<rick_h_> that I know nothing about
<marcoceppi> the testing suite runs
 * rick_h_ goes to check it out
<mgz> looks like it lacks charmworldlib
<marcoceppi> and succeeds
<marcoceppi> mgz: is it because something is broken in helpers?
<marcoceppi> did you update it at all?
<mgz> no, just charmworldlib not existing
 * marcoceppi scratches head, sounds like venv isn't working or something
<mgz> there's an egg in the venv...
<mgz> er... I need to remember how to do some of this
<rick_h_> mgz: activate the venv?
<rick_h_> mgz: is this running the tests? they should auto be in the venv paths
<mgz> rick_h_: trying to diagnose why the tests fail
<mgz> import errors are tricky, particulary when I can't trivially reproduce the environment the import is happening in
<rick_h_> mgz: k, building here with the latest trunk to see if I can dupe
<rick_h_> mgz: the import is from the test then?
<rick_h_> marcoceppi: we should update to use the package name for imports vs the . as well imo.
<mgz> yeah, just need to get my test stuff working, venv/bin/python3 helps
<marcoceppi> rick_h_: ack
<mgz> rick_h_: generally an import failing like that means some *other* import failed, I have it now
<rick_h_> mgz: k, tests pass trunk here. not seeing the error. It's all virtualenv'd so if you run things manually you need to either activate it or work through venv/bin to do things
<mgz> accidentally wrote a line of python in go :)
<rick_h_> lol, did it get cranky?
<marcoceppi> ;)
<rick_h_> mgz: suggestions for improvement welcome. The goal is to get tox and multiple python versions going, which is part of the reason for the process.
<marcoceppi> hazmat: you around
<marcoceppi> ?
<hazmat> marcoceppi, yes
<marcoceppi> hazmat: sorry, false alarm, was maas tomfoolery
<hazmat> rick_h_, is anyone working on making charmworld ui / manage.jujucharms.com look decent/functional?
<rick_h_> hazmat: not atm.
<rick_h_> hazmat: it's pending some project talk next week and not currently on any project plan I know of atm.
<hazmat> rick_h_, ic, thanks
<rick_h_> hazmat: but jcastro has a giant cheering flag he waves around the topic :)
<jcastro> I have an item on the spreadsheet for the sprint
<hazmat> rick_h_, its painful.. to use.. but its the only thing to use.. i get tempted to resurrect the old code base on a machine just to have a compact usable ui.
<jcastro> hazmat, let's determine that next week
<jcastro> I hear ya
<jcastro> but like, we have an existing mess and adding another UI would make it worse imo
<rick_h_> hazmat: write a cli client using the api for us :) /me has pondered that a couple of times
<hazmat> rick_h_, well.. part of the issue is that api is in the wrong place.. i'm not sure promoting it more is the right answer.. and really a cli isn't going to help for information display.
<rick_h_> hazmat: +1 honestly I'm going to be all on board with the planned session of splitting the store out of the juju base and building it up.
<avoine> someone have tried to send json with the relation-set command?
<marcoceppi> avoine: the relation-set command can take json with --format json
<avoine> cool, I'll use that
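A common pattern for avoine's question, sketched below: inside a charm hook, structured data can be shipped over a relation by setting a JSON string as an ordinary value and parsing it on the remote unit. The key name "config" and the payload are arbitrary examples; relation-set/relation-get are the standard hook tools, and these lines only run inside a hook context.

```shell
# In the providing charm's hook: store JSON as a plain string value.
relation-set config='{"port": 5432, "replicas": 2}'

# In the consuming charm's hook: read it back and parse it.
# relation-get config | python -c 'import json,sys; print(json.load(sys.stdin)["port"])'
```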
<jamespage> marcoceppi, https://launchpad.net/~mysql-ubuntu/+archive/mysql-5.6/+packages
<jamespage> first cut of mysql-5.6 package for 14.04
<marcoceppi> mgz: thank you for your pr, we found the issue with bzr. Apparently it really wants to know whoami
<marcoceppi> jamespage: thank you sir!
<jamespage> marcoceppi, np
#juju 2014-01-31
<aquarius> marcoceppi, ping about discourse and upgrades. I've installed discourse with juju, and discourse says I should upgrade. You said that the juju discourse charm doesn't handle upgrades. What should I do?
<aquarius> (am afk, but happy to read responses and deal with them later)
<thumper> o/ aquarius
<thumper> aquarius: happy birthday for the other day
<thumper> I was too slow
<aquarius> hey thumper!
<aquarius> you're only 49 minutes too late :)
<thumper> sorry, was at the gym
<thumper> and you were probably at the pub
<sarnold> happy regular day aquarius!
 * aquarius laughs
<aquarius> I was, in fact, in the pub.
<aquarius> Currently home.
<aquarius> post-pub :)
<aquarius> happy 31st January to me,
<aquarius> Only 364 days until my birthday!
<sarnold> YAY
<marcoceppi> aquarius: so, it does actually do upgrades, if you just increment the version number in release, or branch, or tag, or whatever config option I called it
<marcoceppi> but you don't want to run juju upgrade-charm
<aquarius> marcoceppi, oh! so, how do I upgrade discourse then?
<marcoceppi> aquarius: there's a release config options, or something
<aquarius> marcoceppi, I have no idea what that means :)
<marcoceppi> aquarius: one min, otp
<aquarius> marcoceppi, no worries :)
<marcoceppi> aquarius: okay, so there's a release configuration option, https://bazaar.launchpad.net/~marcoceppi/charms/precise/discourse/trunk/view/head:/README.md#L69 what is your release configuration currently set to?
<marcoceppi> juju get discourse should illuminate that
<marcoceppi> albeit in a very verbose and slightly annoying fashion
<aquarius>     value: latest-release
<aquarius> if I've correctly understood the output
<marcoceppi> aquarius: right, so you probably didn't follow the readme, which is cool, I forgive you
<aquarius> heh. I was here telling you everything I did when I installed it :)
<aquarius> forgiveness appreciated. How can I redeem myself?
<marcoceppi> aquarius: so you can find all the valid options for this configuration here: https://github.com/discourse/discourse/releases
<marcoceppi> aquarius: basically, you should run `juju set discourse release="v0.9.8.3"` then wait about 5-10 mins
<marcoceppi> and discourse should upgrade
<aquarius> marcoceppi, wow! that's it?
<marcoceppi> aquarius: pretty much
<aquarius> marcoceppi, and that will work from whichever version I'm on now?
<marcoceppi> aquarius: yes, as long as you always go forward, they follow a very rigorous upgrade path using db:migrations, etc
<aquarius> sure, but I don't have to upgrade to all the interim versions? nice.
<aquarius> let me just check what we're on now :)
<aquarius> heh, we are a bit behind :)
<aquarius> marcoceppi, what happens to the forum while the upgrade is happening? does it stay live?
 * aquarius runs juju set discourse release="v0.9.8.3"
<marcoceppi> aquarius: so...how's it going?
<aquarius> marcoceppi, it hasn't upgraded, yet. How can I know what it's doing?
<aquarius> juju get discourse shows that release is set to v0.9.8.3
<aquarius> but the forum itself is not upgraded
<aquarius> I don't know whether that's because my machine is in the middle of doing the upgrade, or whether the upgrade failed, or whether I have to do something else to tell it to upgrade?
<marcoceppi> aquarius: you can run juju ssh discourse/0
<marcoceppi> then tail -f /var/log/juju/unit-discourse-0.log
<marcoceppi> it can take quite awhile to run, depending
<aquarius> aha.
<aquarius> 2014-01-31 09:31:37 ERROR juju runner.go:211 worker: exited "uniter": ModeConfigChanged: cannot obtain storage account keys: GET request failed: ForbiddenError - The server failed to authenticate the request. Verify that the certificate is valid and is associated with this subscription. (http code 403: Forbidden)
<aquarius> that looks relevant
<aquarius> it's saying that over and over again
<aquarius> marcoceppi, I don't understand what that means. Don't even know what it's trying to access and failing :)
<marcoceppi> aquarius: that's a new one on me, any idea fwereade?
<fwereade> marcoceppi, sorry, I was disconnected, what's the problem?
<marcoceppi> 2014-01-31 09:31:37 ERROR juju runner.go:211 worker: exited "uniter": ModeConfigChanged: cannot obtain storage account keys: GET request failed: ForbiddenError - The server failed to authenticate the request. Verify that the certificate is valid and is associated with this subscription. (http code 403: Forbidden)
<marcoceppi> that looks relevant
<aquarius> fwereade, that's after I set a new release version for discourse (and so it presumably started to upgrade to the new version)
<fwereade> aquarius, crikey, that's new to me
<fwereade> aquarius, is it consistent, or does a `juju resolved --retry` help?
<aquarius> fwereade, at the moment, the job seems to be restarting every 3 seconds on my cloud VM, and then it has a bunch of log entries ending with that one.
<aquarius> I can try juju resolved --retry, but do I have to somehow "stop" what it's already doing?
<fwereade> aquarius, got you, sorry, I'm still a bit morningy
<aquarius> I can post the whole log section if that would help?
<fwereade> aquarius, that would definitely be interesting
<fwereade> aquarius, thanks
<aquarius> fwereade, http://pastebin.ubuntu.com/6848481/ is the snippet that comes up over and over
<fwereade> aquarius, what's the agent-state for that unit in `juju status`?
<aquarius>         agent-state: started
<aquarius> fwereade, and the postgresql unit is the same
<aquarius> yeah, still doing it. Should I stop juju from trying to do this upgrade somehow?
<aquarius> I'm now worried that it's going to eat all my compute time
<aquarius> another question: is my laptop participating in this process? If I restart my laptop, will it screw up anything that juju is doing?
<aquarius> marcoceppi, if I unset the release config variable, will it stop trying to do the upgrade and failing?
<marcoceppi> aquarius: restarting does nothing, since it's the remote machine that's doing the work (not your laptop)
<aquarius> right. So I can at least reboot my laptop, which update manager is telling me I need to :)
<marcoceppi> aquarius: re-setting the option might help, but it'll just fire a hook again
<marcoceppi> aquarius: what version of juju are you on?
<aquarius> marcoceppi, 1.16.5-saucy-amd64
<aquarius> according to juju --version
<marcoceppi> k
<aquarius> I don't really want the machine to be busy all the time, because it'll use up compute seconds, and that costs money ;)
<marcoceppi> aquarius: you can try a slightly bad thing, and stop the agents
<marcoceppi> then start them again
<aquarius> marcoceppi, that sounds OK. How do I do that?
<marcoceppi> aquarius: on the node run sudo initctl list | grep juju-
<marcoceppi> then restart the two jobs listed there
<aquarius> weird. it doesn't show anything
<marcoceppi> should be something like jujud-machine-# and juju-discourse-unit-0
<marcoceppi> something like that
<marcoceppi> aquarius: jujud-*
<aquarius> aha, jujud :)
<aquarius> do I need to stop them in a particular order?
<aquarius> should I stop each, then start each, or can I restart one then restart the other?
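The agent restart marcoceppi suggests can be sketched as below. On a real unit machine you would drop the stubs and run the loop with sudo; here `initctl` and `restart` are stubbed with sample output (the job names are illustrative) so the job-name parsing can be exercised standalone. Order does not matter much; each job restarts independently.

```shell
#!/bin/bash
set -eu

initctl() {  # stub: mimics `initctl list` output on a juju 1.x machine
    cat <<'EOF'
jujud-machine-1 start/running, process 1234
jujud-unit-discourse-0 start/running, process 5678
ssh start/running, process 900
EOF
}

restart() { echo "restarted $1"; }  # stub for upstart's restart command

# jujud-* matches both the machine agent and the unit agent.
restart_juju_agents() {
    initctl list | awk '/^jujud-/ {print $1}' | while read -r job; do
        restart "$job"
    done
}

restart_juju_agents
```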
<aquarius> marcoceppi, OK, restarted them. logfile still shows that it's continually trying to do whatever it's trying and then fails with the "ForbiddenError" every 3 seconds
<marcoceppi> aquarius: you can increase your verbosity of logging
<marcoceppi> which might shed some more light on this
<aquarius> marcoceppi, oh, that sounds handy. How do I do that?
 * aquarius is rtfs to work out what juju's trying to fetch and failing
<marcoceppi> juju set-env 'logging-config=<root>=DEBUG'
<marcoceppi> that should be sufficient
<aquarius> marcoceppi, ok, have run that
<aquarius> marcoceppi, should I expect that the log files will be more detailed?
<marcoceppi> aquarius: theoretically, yes
<aquarius> they aren't, so far
<aquarius> maybe it hasn't applied it yet?
<marcoceppi> aquarius: I wonder if your node is having problems talking to the bootstrap node
<marcoceppi> which is why you're getting all this. Can you pastebin juju status?
<aquarius> http://paste.ubuntu.com/6848764/
<marcoceppi> huh, all your units are registered as started
<marcoceppi> I wonder if this is because you're running 1.16.5 and you have 1.16.3 deployed
<marcoceppi> there may be some incompatibilities, though there shouldn't be
<aquarius> perhaps?
<aquarius> I mean, that's a minor version upgrade only :)
<aquarius> just trying "juju unset discourse release" to see if that stops the server continually trying to do a non-working thing
<aquarius> bah, it doesn't.
<aquarius> server still trying to config things and failing, every 3 seconds :(
<aquarius> that error is being thrown by something trying to connect to Azure's storage account
<aquarius> so I think this is some sort of config error
<aquarius> specifically, the line throwing the error is http://bazaar.launchpad.net/~go-bot/juju-core/trunk/view/head:/provider/azure/environ.go#L113
<aquarius> 	keys, err := azure.GetStorageAccountKeys(accountName)
<aquarius> but I can't see where GetStorageAccountKeys is defined!
<aquarius> the only place that function is mentioned in the juju-core source tree is that line which calls it
<aquarius> I *suspect* that the problem here is that I changed my Azure subscription.
<aquarius> aaah, hang on, there is a "storage-account-name" in ~/.juju/environments.yaml. I wonder if that's wrong?
<aquarius> No, that's correct.
<aquarius> However, if I deployed the discourse setup with a storage name in the azure environments.yaml, and then I *changed* environments.yaml, what do I need to do to teach the juju deployment about the new value?
<aquarius> I am wondering whether it was set up talking to the old storage and now that it's changed, juju has cached the keys somewhere so it doesn't know how to talk to the new storage
<aquarius> I'm not even sure that the storage ID changed, mind
<aquarius> but I bet juju not being able to obtain the storage keys is something to do with me changing the ownership of that storage block from one MSDN subscription to another.
<aquarius> I do not have any idea how to see which keys juju is trying to use to talk to azure, and what the keys should be, and how to correct them. Is that even doable?
<aquarius> is this more an fwereade question? :)
<marcoceppi> aquarius: that's actually very probable
<marcoceppi> aquarius: they're all in the mongodb database somewhere, and fwereade or mgz might know more
<marcoceppi> aquarius: tearing down and re-standing up would be easiest but iirc this is a production site
<aquarius> it is.
<aquarius> it is community.badvoltage.org
<aquarius> active forum.
<aquarius> I do not want to destroy it and set it up again :)
<marcoceppi> aquarius: understood
<mgz> aquarius: which keys specifically? some details are in your ~/.juju/environments/{ENV}.jenv file
<marcoceppi> aquarius: fwiw, and an fyi, postgresql does nightly dumps to ~postgres/backups and keeps the last five days
<marcoceppi> if you're not already mirroring them offsite
<aquarius> mgz, I don't know. I am speculating what the problem is. Can you see the scrollback? I can fill in more detail if you cannot
<aquarius> marcoceppi, I have backups, but they're for disaster recovery, not so I can blow away the village and start again unless I really really have to :)
<mgz> okay, reading
<marcoceppi> aquarius: yes, right
<marcoceppi> mgz: so in short aquarius is getting "2014-01-31 09:31:37 ERROR juju runner.go:211 worker: exited "uniter": ModeConfigChanged: cannot obtain storage account keys: GET request failed: ForbiddenError - The server failed to authenticate the request. Verify that the certificate is valid and is associated with this subscription. (http code 403: Forbidden)"
<aquarius> mgz, brief summary: using marcoceppi's discourse charm, deployed to Azure. Deployment went fine; site has been live for about two months. Today I tried to upgrade discourse. juju log on the discourse unit throws the error from above.
<marcoceppi> but he recently changed the bucket and possibly some additional details for the env
<aquarius> mgz, after the discourse deployment was created, I changed my Azure subscription (from the free trial to an MSDN Ultimate subscription) and had MS Support migrate all my VMs/storage etc over to the new subscription. I then edited environments.yml and changed the subscription-id and so on to the new values. I can successfully connect to the juju units with juju ssh and so on, so I must have done it roughly correctly.
<aquarius> mgz, I am *speculating* that that error is being caused because juju first connected to the Azure storage unit when it was under the free trial subscription ID, and cached some keys or something; now that storage unit (which may have a new ID and may not; I do not think it has) is under the MSDN subscription ID so juju's cached keys don't work.
<aquarius> But that is speculation.
<mgz> aquarius: can you compare the values in your environments.yaml with the ones in the .jenv?
<aquarius> mgz, management-subscription-id and storage-account-name are the same in ~/.juju/environments.yaml and ~/.juju/environments/azure.jenv
<mgz> can you see the cert azure has under their web ui?
<aquarius> mgz, I don't know. What am I looking for?
<mgz> I'm not sure :) the x509 cert thing the azure setup instructions talk about being uploaded
<aquarius> mgz, there is a "manage access keys" thing for the storage
<aquarius> mgz, I can regenerate those keys, and then teach juju about the new keys
<aquarius> but I don't know where juju stores the keys!
<aquarius> let me look in .jenv
<aquarius> mgz, I am 95% sure that the .cer file that I uploaded on setup is correct, because it's still there, it didn't change, and if I had that wrong then I don't think that juju would be able to talk to azure *at all*.
<aquarius> I think the problem is juju isn't able to talk to the *storage*
<aquarius> but then maybe the problem is that juju is trying to get the storage access keys and isn't allowed because something else is wrong
<aquarius> mgz, I can't see the actual cert in the Azure web UI. I can see that there *is* a cert, that it's called "Juju" (which is what I named it), and I can see its fingerprint.
<aquarius> (er, thumbprint)
<aquarius> I don't know how to get the thumbprint of whichever cert juju is using though!
<aquarius> the storage access keys are described at https://www.windowsazure.com/en-us/documentation/articles/storage-manage-storage-account/#how-to-view-copy-and-regenerate-storage-access-keys
<aquarius> can I see which ones juju is using and update them?
<mgz> I'd expect either the path or the cert itself to be in the jenv
<aquarius> (although I think the error I'm getting may be from juju trying to get those keys and being denied, rather than *using* those keys and being denied)
<mgz> right, it doesn't get as far as getting the storage keys
<aquarius> mgz, so, your theory is that when juju tries to get the storage keys, it authenticates that request with the cert, and the cert is wrong? that sounds plausible, but I don't know how to confirm that :)
<mgz> aquarius: something like that, though the request goes through as far as getting an http error, which implies a connection of some kind worked
<aquarius> *nod* that http request is getting a Forbidden
<aquarius> I am trying to work out how to calculate the thumbprint of the certificate I have :)
<mgz> I'd like to ask one of the red squad who wrote the gwacl bits
<mgz> rvba: ^ any ideas on this azure error?
<aquarius> OK. I have confirmed that the thumbprint of the cert that Azure has, and the thumbprint of the cert I have set in environments.yaml, are the same.
<aquarius> the azure thumbprint is listed in azure web ui > settings > management certificates, and I got the fingerprint with cat (cert file from ~/.juju/environments.yaml) | openssl x509 -sha1 -fingerprint
<mgz> okay, fun
<aquarius> hm! question. If I look at the cert file itself, I can see the certificate value after --- BEGIN CERTIFICATE ---
<aquarius> in .juju/environments/azure.jenv, I can see that same certificate under management-certificate
<aquarius> however there is also, in environments/azure.jenv, a ca-cert value which seems to be a different cert. Are we expecting those two to be different?
<aquarius> am happy to provide debugging information for rvba if it'd help
<allenap> mgz: I don't know where to start. Can you summarise the question?
<mgz> aquarius: that one is a differnt one, used for juju itself
<aquarius> mgz, ok, that's not a problem then :)
<mgz> allenap: see summary at 11:23 in scrollback
<mgz> allenap: any ideas?
<allenap> mgz: I'm also thinking along the x509 road, but I've just spent the last 5 minutes trying to log into Azure.
<ashipika> a quick question.. what does "ERROR no such request "AddCharm" on Client" mean?
<allenap> aquarius: In management.windowsazure.com > Settings > Management Certificates, does the thumbprint of the certificate that Juju's using match up to the subscription you want to use?
<aquarius> allenap, I believe so, yes. I have only one cert in Azure's web UI. In ~/.juju/environments.yaml I name a management-certificate-path. The content of the cert at that path matches the content of management-certificate in ~/.juju/environments/azure.jenv. The fingerprint calculated from the cert path with "cat (cert file from ~/.juju/environments.yaml) | openssl x509 -sha1 -fingerprint" matches the thumbprint listed in the
<aquarius> Azure web UI.
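The thumbprint comparison aquarius describes can be wrapped up as below. The certificate generated here is a throwaway self-signed one, for illustration only; to reproduce the real check, point `thumbprint` at the file named by management-certificate-path in ~/.juju/environments.yaml and compare against the value Azure lists under Settings > Management Certificates.

```shell
#!/bin/bash
set -eu

# SHA-1 thumbprint of an x509 cert, colon-separated hex, matching what
# the Azure web UI displays.
thumbprint() {
    openssl x509 -in "$1" -noout -fingerprint -sha1 | cut -d= -f2
}

# Disposable self-signed cert purely to demonstrate the function
# (replace with the real management certificate path).
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=juju-demo" \
    -keyout "$workdir/key.pem" -out "$workdir/cert.pem" 2>/dev/null

thumbprint "$workdir/cert.pem"
```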
<mgz> my best guess, I think, is that the subscription-id change in your local environments.yaml never propagated to the state server properly
<aquarius> OK. What do I do to rectify that?
<allenap> Yeah, what mgz said. Not sure how to get Juju to use/generate a new certificate though.
<mgz> and now things are borked because nothing there can actually stay alive with a valid value
 * allenap will be back in a bit.
<aquarius> mgz, hm. that sounds like it could be the case... can i confirm that it is the case?
<mgz> aquarius: does `juju status` work at all, and what version of juju are you using (and what does the state server have?)
<mgz> we may have a way of getting this alive again
<aquarius> mgz, it works fine: output is at http://paste.ubuntu.com/6848764/
<aquarius> $ juju --version
<aquarius> 1.16.5-saucy-amd64
<aquarius> I do not know how to work out what's on the state server?
<mgz> aquarius: so, we should be able to use `juju set-env` to update the subscription-id
<aquarius> mgz, ok, that sounds encouraging
<mgz> can you dump the all-machines.log from the state server first for looking at later
<aquarius> mgz, certainly, if you tell me how :)
<mgz> then you should be able to do `juju set-env -e {AZURE} management-subscription-id={NEWVALUE}`
<mgz> aquarius: `scp {MACHINE-0-IP}:/var/log/juju/all-machines.log ~`  should do
<aquarius> um, I can't scp into it.
<aquarius> aah, juju scp. :)
<aquarius> no, that's no good, because juju scp doesn't work if you give it a DNS name.
<mgz> you can give it the machine number
<aquarius> just worked that out :)
<aquarius> I tried giving it the machine name first and that didn't work either, but "0" works :)
<aquarius> ok, it's copying about 120MB of log :)
<mgz> I expect a lot of it is that same error repeated forever
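mgz's guess that most of the 120MB log is the same uniter error repeated is easy to check once the file is copied down. A stubbed three-line sample stands in for the real all-machines.log here so the commands can run standalone; the timestamps and message are taken from the error above.

```shell
#!/bin/bash
set -eu

# Sample log standing in for ~/all-machines.log
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2014-01-31 09:31:34 INFO juju.worker runner.go:260 start "uniter"
2014-01-31 09:31:37 ERROR juju runner.go:211 worker: exited "uniter": ModeConfigChanged: cannot obtain storage account keys
2014-01-31 09:31:40 ERROR juju runner.go:211 worker: exited "uniter": ModeConfigChanged: cannot obtain storage account keys
EOF

grep -c 'exited "uniter"' "$LOG"          # how many times the uniter died
grep 'exited "uniter"' "$LOG" | tail -n1  # the most recent failure
```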
<aquarius> right, once that's done, to be clear, I should do "juju set-env management-subscription-id=VAL" where VAL is actually the subscription ID that I already have set in environments.yaml?
<mgz> yes, that's right
<mgz> do you recall if you changed any of the other settings in environments.yaml when you did that?
<aquarius> I changed only the management-subscription-id, looking at my comments
<mgz> ace, then that should be all
<aquarius> mgz, question. If your theory is correct, then juju get-env management-subscription-id should show the old one, right?
<mgz> then you can tail the all-machines.log on the state server and see if anything different happens
<aquarius> or will that look in my local environments.yaml configuration?
<mgz> aquarius: yeah, try that first
<aquarius> ha haaaaaaaaaaaaaaa!
<aquarius> juju get-env shows the old one!
<aquarius> victory.
<aquarius> right. I shall set the new one.
<aquarius> ooooo.
<aquarius> $ juju set-env management-subscription-id=(new subscription id)
<aquarius> ERROR GET request failed: MissingOrIncorrectVersionHeader - The versioning header is not specified or was specified incorrectly. (http code 400: Bad Request)
<aquarius> that's discouraging.
<mgz> can you do that again with juju --debug?
<aquarius> hm
<aquarius> looks like it worked that time
<aquarius> maybe it was just temporary
<mgz> it may have worked the first time, but had that message as fallout after it succeeded
<aquarius> get-env now shows the new value.
<mgz> which then got fixed
<mgz> so, now look at what's happening in all-machines.log on machine 0
<aquarius> ahahahahaha!
<aquarius> and the log on the discourse server now shows that it's installing discourse, rather than looping infinitely and failing.
<mgz> excellent.
<aquarius> this is extremely encouraging.
<aquarius> thought for the future: juju ought to detect if your subscription ID is wrong, and put up a massive great red message to tell you so if it is :)
<marcoceppi> aquarius: give it a bit, that upgrade is now happening
<aquarius> marcoceppi, it may not be; I unset the config value
<aquarius> marcoceppi, but now I'll set it again
<dimitern> aquarius, you could file a bug for that :)
<marcoceppi> aquarius: what did you set it to, just "" ?
<mgz> yeah, that error message was not helpful, and the status output giving no indication of problems is pretty bad
<aquarius> marcoceppi, I used unset, so that put it back to what it was before (latest-release or whatever)
<marcoceppi> aquarius: ah, okay
<aquarius> OK. Now setting the discourse version. Let's see what happens :)
<aquarius> looks like it's doing *something*, at least -- it's installing stuff.
<aquarius> bug https://bugs.launchpad.net/juju-core/+bug/1274922 submitted asking for something to warn about mismatched subscription IDs, dimitern :)
<_mup_> Bug #1274922: Changing Azure subscription ID in environments.yml does not propagate to server, and does not warn <juju-core:New> <https://launchpad.net/bugs/1274922>
<aquarius> oooh. upgrade seems to have failed, marcoceppi
<aquarius> marcoceppi, http://pastebin.ubuntu.com/6849295/
<aquarius> and now the forum's down :(
<wgrant> I'm getting "cannot start bootstrap instance: index file has no data for cloud {az-1.region-b.geo-1 https://region-b.geo-1.identity.hpcloudsvc.com:35357/v2.0/} not found" when trying to bootstrap on HP Cloud US East. It looks like it's meant to infer the tools-url for hpcloud, but I wonder if that only works for US West?
<marcoceppi> aquarius: go to pm
<dimitern> aquarius, thank you!
<aquarius> sinzui, ping: you've just marked my bug as a dupe and it isn't. My problem wasn't that environments.yaml disagreed with .jenv -- they both agreed but the actual server did not, I think.
<sinzui> Oh,. thank you aquarius
<aquarius> nw :)
<aquarius> sinzui, we fixed the problem with "juju set-env management-subscription-id=(the id from environments.yml)"
<aquarius> sinzui, so it's an easy fix once you know that that's the problem... it just took two hours to work that out ;)
<sinzui> thank you aquarius
<ahasenack> guys, what's up with this error, what else can I do to debug? It's during bootstrap on aws
<ahasenack> 2014-01-31 13:35:46 WARNING juju.environs open.go:258 failed to write bootstrap-verify file: cannot make S3 control bucket: A conflicting conditional operation is currently in progress against this resource. Please try again.
<bloodearnest> in config-changed, is there a way to access previous config values? e.g. for doing close-port on the old port if a port has changed?
<marcoceppi> bloodearnest: no, you'll have to write those to a file or something in $CHARM_DIR, for eg you could say, during config-changed, check if .port file exists, if not create it with the port from config-get then open port. If the .port file exists, read the contents, if the config-get port != .port file contents, close .port file contents, then update the .port file and open the port
<bloodearnest> marcoceppi: ack, thanks, that'll do, it'd be good util for charmhelpers I guess
<marcoceppi> bloodearnest: I agree
<bloodearnest> marcoceppi: or builtin, like "config-get --previous"
<marcoceppi> bloodearnest: right, builtin is the way to go; a wrapper in charmhelpers could only intercept calls every time you invoked it.
<bloodearnest> yeap
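The pattern marcoceppi describes can be sketched as below: remember the previously opened port in a file under $CHARM_DIR so config-changed can close the old port when it changes. In a real hook the new port would come from `config-get port`, and opening/closing would use juju's open-port and close-port hook tools; both are stubbed here (as open_port/close_port) so the logic runs standalone.

```shell
#!/bin/bash
set -eu

open_port()  { echo "open-port $1"; }   # stand-in for juju's open-port
close_port() { echo "close-port $1"; }  # stand-in for juju's close-port

handle_port_change() {
    local new_port="$1"
    local port_file="${CHARM_DIR:-/tmp}/.port"

    if [ ! -f "$port_file" ]; then
        # First run: record the configured port and open it.
        echo "$new_port" > "$port_file"
        open_port "$new_port"
        return
    fi

    local old_port
    old_port=$(cat "$port_file")
    if [ "$old_port" != "$new_port" ]; then
        # Port changed: close the old one before opening the new one.
        close_port "$old_port"
        echo "$new_port" > "$port_file"
        open_port "$new_port"
    fi
}
```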
<lazyPower> Has anyone tried running the juju test runner since the 1.17.1 release? https://pastebin.canonical.com/104039/
<lazyPower> ah wait, i think this is in relation to charm-tools getting an update this morning
<lazyPower> aannd only affects the local provider
<lazyPower> https://bugs.launchpad.net/charm-tools/+bug/1274966
<_mup_> Bug #1274966: test runner broken on local <Juju Charm Tools:New> <https://launchpad.net/bugs/1274966>
<wgrant> Trying to bootstrap on HP Cloud US West, using v2.0, I get "request (https://region-a.geo-1.compute.hpcloudsvc.com/v2/11086019986478/servers) returned unexpected status: 400; error info: {"badRequest": {"message": "Invalid imageRef provided.", "code": 400}}" when creating the bootstrap node. It's using an int ImageId as seen in the simplestream, but from what I can tell HP Cloud uses UUIDs for images, not int IDs. I suspect I might be using the ...
<wgrant> ... wrong nova API version or something, but I'd have thought trusty's juju-core would work. Does anybody have any ideas what's going wrong?
<lazyPower> wgrant:  Last i heard HPCloud is only working in US-East
<wgrant> lazyPower: cloud-images.u.c doesn't list any images for US East, so juju bootstrap fails much earlier there
<lazyPower> hmm, ok, sounds tricky
<marcoceppi> lazyPower: hp cloud does not work on east
<marcoceppi> wgrant: I think we only have west images/metadata
<lazyPower> that's a direct contradiction to what i observed Wednesday night, but the situation may have changed.
<wgrant> marcoceppi: Right, I realised that once I saw the simplestream data. But a bootstrap on west fails, from what I can tell because it tries to use an int imageid rather than a UUID
<marcoceppi> wgrant: odd, horizon should supply an int, or has, unless it's something recent
<wgrant> marcoceppi: nova image-list shows UUIDs, as does horizon.
<marcoceppi> wgrant: are you using the NEW stuff, v13?
<marcoceppi> they recently rolled out new changes
<wgrant> marcoceppi: Where would I see that?
<marcoceppi> like new crazy dashboard?
<marcoceppi> wgrant: one second
<wgrant> My account and project was created today
<marcoceppi> wgrant: at console.hpcloud.com?
<wgrant> Let's see if I can find out which version it is
<wgrant> morty: horizon.hpcloud.com
<wgrant> Mah
<wgrant> Bah
<wgrant> marcoceppi: ^^
<wgrant> Which is the new one, I think?
<marcoceppi> wgrant: yeah, so horizon is the 13.5 version, console is the old 12.12
<marcoceppi> they must have different image ids now
<wgrant> Right, so maybe juju doesn't work with 13.5 atm
<wgrant> I'll try a custom image-stream with UUIDs
<marcoceppi> wgrant: yeah, custom image stream is the way to go atm
<wgrant> marcoceppi: Hm, image-stream: does not seem to exist for openstack. Do I have the wrong option name?
<bkerensa> has ssh provider landed in Juju yet?
<dpb1> marcoceppi: around?
<dpb1> marcoceppi: I'm hitting lots of py3/py2 barriers with amulet ATM.  Is there a reason the package isn't installing py2 modules anymore?
<dpb1> marcoceppi: I guess lots of barriers is more or less juju-deployer modules are not shipped as py3
#juju 2014-02-01
<marcoceppi> dpb1: amulet shells out to juju deployer, so that shouldn't be an issue
<ev> hm, shouldn't hpcloud be set up to use nova.clouds.archive.ubuntu.com instead of ubuntu-cloud.archive.canonical.com?
<ev> filed https://bugs.launchpad.net/juju-core/+bug/1275294 for it
<_mup_> Bug #1275294: HP Cloud images use external archive <juju-core:New> <https://launchpad.net/bugs/1275294>
#juju 2014-02-02
<dpb1> marcoceppi: sure, but there are issues with the way it reads in deployer config files.  So I have been using deployer to parse things.  Which is only py2 at this point.
<jcverdie> Hi there, I'm a newbie to juju, I was trying to setup gerrit just to see if it could help my infrastructure but i could not get it working properly, and the readme in the charm are not so helpful :( Is there any good URL you could point me too?
<lazyPower> Greetings jcverdie, I just pulled up the Gerrit charm on jujucharms.com and started reviewing the README. On which portion are you stuck?
<jcverdie> hi lazyPower first of all thanks. My first try the ec2 instance kept "pending" forever (a couple of hours). I can retry now but I'm confused about what I need to put for volume-map
<jcverdie> i chose canonical-ci/gerrit-59 don't know if you have the same one
<jcverdie> also for public_url not sure i need to fill it
<lazyPower> jcverdie: ah ok well the ec2 instance sitting in a pending state - is that resolved now?
<jcverdie> looks random lazyPower... right now I'm experiencing the same issue on a different charm (gitolite)
<jcverdie> i might be doing something wrong but can't figure what
<lazyPower> it's a sign that something isn't right in the EC2 config.
<jcverdie> do you suggest to start over?
<jcverdie> destroy environment and bootstrap again ?
<lazyPower> I'm bootstrapping on amazon right now to verify there isn't something hinky upstream.
<lazyPower> also, to note, it appears you are working towards building a git based repository service?
<jcverdie> absolutely
<lazyPower> if so, may i recommend our gitlab charm? It has all of these features out of the box
<lazyPower> it may save you a few dollars in running hosts so you can run the review service, hosted repositories, and web interface on the same machine. I've run a self hosted gitlab server for a little over a year now and its been rock solid for me.
<jcverdie> We have been using gitlab as a test run for 6 months
<lazyPower> Ok i'm bootstrapped, bringing up my first node now
<jcverdie> but my team still prefers gerrit so we are now interfacing gitlab with gerrit, so I wonder if it makes sense to keep gitlab just for the repository
<jcverdie> i mean not using merge request nor wiki nor issue tracker
<lazyPower> I don't have much experience with gerrit, so this will be an exploratory session for the both of us
<jcverdie> BTW I have a pending gitolite EC2 instance, it seems stuck. I cannot find the reverse function for "juju deploy" ?
<lazyPower> juju help commands will give you the output, but you're going to look for juju destroy-service and juju destroy-machine to wipe the machine and service from your deployment map
<jcverdie> got it :)
<lazyPower> Ok my first node just became active. To recap, it took approx 3 minutes from bootstrap to machine1 being active and deploying the gui. still pending the gui completion
<jcverdie> i cleaned my environment: just one machine, with juju-gui deployed on it (and running)
<jcverdie> nothing else
<lazyPower> you should be seeing approx the same timelapse on your operations, that will give you a good benchmark on how long you should be waiting for your instances to be "active" or "started" in the juju status output.
<lazyPower> ok, let me fish up the gerrit readme again, and let's walk through a sample deployment from the readme. It looks a little more complex than most since you are base64 encoding configurations and setting them as configuration values of the charm.
<jcverdie> lucky me the one i need is more complex than casual charms :)
<lazyPower> juju deploy postgresql && juju deploy cs:~canonical-ci/precise/gerrit
<lazyPower> juju add-relation gerrit postgresql:db
<lazyPower> it looks like the readme could use some formatting cleanup, the associated 'code blocks' for deploying apache 2 in front of gerrit are missing the proper formatting
<lazyPower> but other than that this looks to be fairly straight forward. My Postgres and Gerrit units just became active. Are you following along at home?
<jcverdie> still pending
<lazyPower> ok, do either of the instances have public-ip's yet/
<jcverdie> actually the apache2 part is simply impossible to understand to me :( and the volume map part I don't know what I should put there (I don't want it to be ephemeral obviously)
<lazyPower> right, i'm taking this one step at a time :) If your instances aren't getting an IP it means there's a mismatch somewhere in the controller and what the nodes are expected to do as i understand it.
<jcverdie> postgre started not gerrit so far
<lazyPower> ok, which region are you running in?
<jcverdie> ok all is up now
<jcverdie> eu-west-1
<lazyPower> Ok. I'm running US-East-1 so there may be a bit of difference in latency then.
<jcverdie> i have all the public addresses (but no service exposed, except for my juju-gui)
<lazyPower> I'm pending a db-relation-changed hook failure after relating the two services.
<lazyPower> 1 moment
<lazyPower> ok it appears that gerrit wants to be configured before you add the database relationship
<lazyPower> Exception: Cannot create admin user.  Missing required charm configuration.Must set admin_username, admin_external_id, admin_email, admin_pubkey, admin_name.
<jcverdie> i haven't got the same error? Where could you see it?
<lazyPower> I attached to the unit: juju debug-hooks gerrit/0
<lazyPower> this allows you to run hook execution interactively and collect any errors / modify the hooks to get them completing successfully. This information would have also crossed the logs, if you attach to `juju debug-log` all of this information gets output there as well.
<lazyPower> jcverdie: https://juju.ubuntu.com/docs/authors-hook-debug.html
<jcverdie> ok got it
<lazyPower> debug-hooks is a tmux session that creates new sessions based on the scope of the currently executing hook. There's some good information in those docs, and its a very handy tool to have in your juju-toolbelt
<lazyPower> jcverdie: brb, phone.
<jcverdie> so i need to remove gerrit and setup all the settings
<lazyPower> not necessarily. You can configure the charm post deployment. Its idempotent so it will reconfigure itself appropriately.
<lazyPower> back, thank you for your patience.
<jcverdie> you're kidding... Thanks for _your_ patience :)
<lazyPower> :) We'll discuss my bill later
<lazyPower> j/k
<jcverdie> :p
<lazyPower> OK so, i picked the canonical-ci flavor of the charm.. and now i have to find my Ubuntu SSO ID
<lazyPower> i'm not sure where to get that since the web interface isn't making it very apparent
<jcverdie> according to the readme I thought we could use any openID ?
<lazyPower> that should be the case
<lazyPower> let me look a bit closer. i'm probably looking for the wrong thing
<hobbyBobby> can't seem to add ssh key to juju bootstrap node
<lazyPower> juju by default uses your ~/.ssh/id_rsa* files and copies them for you
<lazyPower> hobbyBobby: there is a command to add more ssh keys to juju, however I don't remember what it is and I can't seem to find it in the docs.
<lazyPower> marcoceppi: when you get this, if you recall the juju command to add keys, can you reference for me?
<hobbyBobby> lazyPower: I'm looking at preseed alternatives, I am doing this from root user on maas node, I just put id_rsa.pub as the public ssh key
<lazyPower> hobbyBobby: http://maas.ubuntu.com/docs/juju-quick-start.html - following that documentation?
<lazyPower> jcverdie: interesting i'm getting schema migration failure too
<jcverdie> lazyPower:  :(
<lazyPower> This may be something I've done incorrectly with the charm config, but i'm not having much success in getting the charm to behave.
<jcverdie> you changed something to the charm?
<lazyPower> just the configuration values, but i'm seeing permission denied on the postgres migration. This may be a leftover from removing/re-adding the relationship
<lazyPower> let me wipe the services and start from scratch
<lazyPower> like it may have a cached db user that's no longer valid in postgres
<hobbyBobby> lazyPower: thanks, I'll recheck
<jcverdie> k
<jcverdie> agent-state: error
<jcverdie>         agent-state-info: 'hook failed: "db-relation-changed"'
<jcverdie> now i have the same error you got
<lazyPower> Whats the output in debug-log?
<lazyPower> can you pastebin it for me?
<jcverdie> https://gist.github.com/jicheu/7d02efca7b4933198193
<lazyPower> ok, since you weren't attached during the time of hook execution we will need to interactively debug the hook to fetch the associated error
<lazyPower> have 2 terminals ready from your juju workstation, and in one, juju debug-hooks gerrit/0
<lazyPower> on the other terminal, juju resolved -r gerrit/0
<lazyPower> let me know once you've completed those steps
<jcverdie> ready
<lazyPower> in the debug-hooks window, you should see a new session pop up, "db-relation-changed" yes?
<jcverdie> I got it on the debug hooks terminal : gerrit/0:db-relation-changed %
<jcverdie> yes:)
<lazyPower> perfect, run `hooks/db-relation-changed`
<jcverdie> on the debug-hooks terminal?
<lazyPower> correct
<jcverdie> not so good... https://gist.github.com/jicheu/e528cf65d3d7a36f5fd8
<lazyPower> Ok so thats what i initially ran into. The charm expects these values to be configured. It implicitly states this within the configuration yaml of the charm, but doesn't make reference to that in the README
<lazyPower> I'll open a bug against the charm to update the README, however in the meantime, we need to set those configuration flags. I'm not sure this will introduce an issue - let me finish my redeployment to validate this will yield good results
<jcverdie> sure
<lazyPower> Ok that seems to have corrected the issue. You will need remove gerrit from the deployment map and re-add it. Set the configuration values BEFORE you relate gerrit with postgresql
<lazyPower> a bit annoying since this could have been mitigated, but live and learn right?
<jcverdie> ok so I guess the first step is to remove the unit?
<lazyPower> That works, you dont need to destroy the machine.
<lazyPower> juju remove-unit gerrit/0
<lazyPower> juju add-unit gerrit --to <machine id of original gerrit deployment>
<jcverdie> should I setup the config value before the add-unit? or after?
<lazyPower> after the add unit, but do not relate it with postgres until you have configured the admin user
<jcverdie> ok done
<jcverdie> but when I added the unit it immediately showed the relation with postgre
<jcverdie> should I remove the relation before readding the unit ?
<lazyPower> ah ok, we didn't remove the relationship and juju has that cached
<lazyPower> my mistake, i should have directed you to remove the relationship first
<jcverdie> ok so i remove the relation, the unit, and re-add the unit
<lazyPower> correct
<jcverdie> strange... juju remove-unit gerrit/1
<jcverdie> but juju status still lists it:
<lazyPower> is the charm still in an error state?
<jcverdie> yes
<lazyPower> that is by design. Juju will not remove anything until the error has been resolved
<jcverdie> ouch
<lazyPower> you may need to resolve the charm more than once if it errors on the database departed hook execution too.
<jcverdie> it's getting worse... resolve now tells me: ERROR cannot set resolved mode for unit "gerrit/1": already resolved
<jcverdie> and the unit is listed as "life: dying"
<jcverdie> but still here
<lazyPower> it's working on the tear down
<lazyPower> it may still error between the end of life of that service and now, but its actively working on removing the unit
<jcverdie> ok so let's wait:)
<lazyPower> Are you working on Micro instances?
<lazyPower> or small?
<jcverdie> hmmm... whatever is set by default :)
<lazyPower> should be small then
<lazyPower> hmm... i wonder why the execution is taking longer for you than what i'm seeing.
<lazyPower> This may be a prime opportunity to do some benchmarking this week
<jcverdie> may be I should move to another region at least for testing
<lazyPower> That's not necessary. I may just be impatient.
<jcverdie> it's been dying for a few minutes....
<lazyPower> but now i'm wondering what the differences are in the response times in different regions.
<jcverdie> there's no juju kill -9 :)
<lazyPower> does it appear to be stuck then? Resolving does nothing?
<jcverdie> nop
<lazyPower> is postgres in an error state?
<jcverdie> may be i'd better start over
<jcverdie> no postgresql is doing fine
<lazyPower> yeah, i would destroy the env and start over if you can afford the timesink.
<jcverdie> ok
<lazyPower> we got a little hinky with removing services and redeploying to the same box. it should have been fine but if its acting up lets rule out that possibility
<hobbyBobby> how would i go about refreshing from environments.yaml, it seems to be using old configuration
<lazyPower> I'm not sure what you're asking me
<hobbyBobby> well I go bootstrap and get this
<hobbyBobby> lazyPower: 2014-02-02 21:21:17 DEBUG juju.environs open.go:75 ConfigForName found bootstrap config map[string]interface {}{"name":"maas", "state-port":37017, "tools-url":"", "api-port":17070, "authorized-keys":"ssh-rsa", "default-series":"precise", "development":false, "image-metadata-url":"", "admin-secret":"somethingReallyReally99Secrety", "logging-config":"<root>=DEBUG", "maas-agent-name":"ec9bca2a-0aca-4f5b-8a1a-7fdead74774c", "maas-server":"http://$192.168.100.3:80/MAAS"}
<hobbyBobby> 2014-02-02 21:21:17 DEBUG juju.environs.configstore disk.go:77 Making /root/.juju/environments
<hobbyBobby> 2014-02-02 21:21:17 INFO juju.environs open.go:156 environment info already exists; using New not Prepare
<hobbyBobby> 2014-02-02 21:21:17 DEBUG juju.provider.maas environprovider.go:33 opening environment "maas".
<lazyPower> hobbyBobby: whoa whoa
<hobbyBobby> 2014-02-02 21:21:17 ERROR juju supercommand.go:282 could not access file 'ec9bca2a-0aca-4f5b-8a1a-7fdead74774c-provider-state': Get http://$192.168.100.3:80/MAAS/api/1.0/files/ec9bca2a-0aca-4f5b-8a1a-7fdead74774c-provider-state/: lookup $192.168.100.3: no such host
<lazyPower> hobbyBobby: pastebin please
<hobbyBobby> lazyPower: ok, sorry, it's using old host from old config
<lazyPower> its ok :) general rule of thumb is > 2 lines, use pastebin
<jcverdie> lazyPower: i'm deploying again, but not creating the relation
<lazyPower> jcverdie: good plan. Get gerrit's admin config options filled in the GUI before you progress to adding the relationship and it should be fine
<lazyPower> hobbyBobby: ok reading through this log output one second.
<jcverdie> when you talk about the GUI you mean juju-gui?
<lazyPower> Yeah. I use juju-gui for ~ 60% of my deployment testing so i get an accurate view of what an end user would be seeing coming into this from scratch. The gui is bar none the easiest method to interface with juju.
<hobbyBobby> lazyPower: I got it
<lazyPower> excellent, what was the answer?
<hobbyBobby> just delete ~//
<lazyPower> ah ok. Good to know. Thanks
<hobbyBobby> ~/.juju/environments/maas.jenv
<lazyPower> i was thinking you would have to destroy the environment and rebootstrap
<lazyPower> but i'm not very familiar with maas - I don't have the hardware required to test it.
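In retrospect the failure was self-describing: the stale cached maas.jenv carried a maas-server URL with a literal `$` in the host (hence `lookup $192.168.100.3: no such host`), and deleting ~/.juju/environments/maas.jenv forced juju to re-read environments.yaml. A cleaned fragment might look like this (only keys visible in the log; the oauth value is a placeholder):

```yaml
environments:
  maas:
    type: maas
    # note: plain host, no stray "$" as in the stale cached config
    maas-server: "http://192.168.100.3:80/MAAS"
    maas-oauth: "<api-key-from-maas>"
    admin-secret: somethingReallyReally99Secrety
    default-series: precise
```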
<jcverdie> filling info now... lazyPower should i fill admin_privkey and _pubkey?
<lazyPower> Those are required fields. yes
<jcverdie> so I need to put my private rsa key there??
<lazyPower> Correct. if you dont want to put your personal private key, generate a new keypair and use those.
<jcverdie> ok another silly question public_url ? what should i put
<lazyPower> the EC2 public address of the machine for now, unless you have a DNS Entry you want to put there, eg: myawesomecodereview.com
<jcverdie> no dns yet for this :)
<jcverdie> re: sendmail settings did you fill them too ?
<lazyPower> i did not
<lazyPower> I only filled in the admin user configuration fields and it deployed without an issue.
<jcverdie> ok so i filled everything except smtp settings, and I left the ephemeral storage for the moment... go deploying now
 * lazyPower crosses fingers
<jcverdie> I don't see any "deploy" button on the GUI but the unit is listed as running
<lazyPower> did you click "Save settings" after you filled in the configuration fields?
<jcverdie> yep
<lazyPower> ok, and the charm is green on the GUI right?
<jcverdie> right
<lazyPower> juju has already handed off to running hooks/config-changed
<lazyPower> you should be ok to add that DB relationship now
<lazyPower> hooks are queued and run sequentially
<hobbyBobby> lazyPower:so now I have my ssh key in /root/.ssh/id_rsa.pub and running bootstrap with that user with the config in the guide. I still can't ssh into that machine without being asked for some password
<lazyPower> is there a password on the SSH key?
<hobbyBobby> lazyPower:no
<lazyPower> run ssh -vv user@host
<lazyPower> the output should tell you if there's an issue
<jcverdie> let's try :)
<lazyPower> like pubkey(Denied)
<jcverdie> got it, no error
<lazyPower> jcverdie: success!
<lazyPower> time to grab a beer, your gerrit setup should be working and responding on port 8081 of the host
<jcverdie> lazyPower: that's the very first step but at least you solved that one for me :)
<jcverdie> nop it doesnt
<lazyPower> juju expose gerrit
<jcverdie> according to the readme I need to deploy apache2 ?
<hobbyBobby> lazyPower:Trying private key: /root/.ssh/id_dsa, debug1: Trying private key: /root/.ssh/id_ecdsa, debug2: we did not send a packet, disable method
<lazyPower> thats for the reverse proxy, so you can access it on port 80
<jcverdie> wow now I can see it (I couldn't 5 minutes ago) :)
<jcverdie> thanks a lot for your help it is invaluable
<lazyPower> hobbyBobby: May be permissions problem?
<jcverdie> and now I'm back to my volume-map and ephemeral stuff... not to bother you but since you seem on the ball you might have an idea :)
<lazyPower> jcverdie: save your deployment map as a yaml using the gui so you can restore from a known good point of configuration.
<jcverdie> export bundle ?
<lazyPower> yep
<jcverdie> done :)
<hobbyBobby> lazyPower:if I understand correctly we're not allowing the user to use rsa key to validate the public one from the bootstrap node?
<lazyPower> hobbyBobby: i wish i knew more about maas to answer that question. Maybe echo it in #maas and someone will be available to answer?
<jcverdie> FWIW I cannot "sign in" on my gerrit :(
<lazyPower> jcverdie: i used my email that i have associated with login.ubuntu.com as my SSO id, and it worked.
<hobbyBobby> lazyPower: ok thanks
<jcverdie> you have a sign in link at the top right corner ? clicking on it does nothing here
<lazyPower> i did, but i destroyed the environment already...
<lazyPower> 1 sec let me pull up the charm config on jujucharms.com and i'll see what i put in there
<lazyPower> jcverdie: http://i.imgur.com/5q4zHXP.png
<lazyPower> but i'm also already signed into Ubuntu SSO
<lazyPower> so that may have been the secret sauce
<jcverdie> I have a ubuntu account don't know if i have sso.. I used my yahoo openid, did not work
<lazyPower> well i know this is built by the guys on canonical's CI team so i figured the Ubuntu SSO would be the 1:1 testing mechanism
<jcverdie> what is strange is that it does not even ask me for a login... Sign in just does nothing
<lazyPower> but it should work equally across the board.
<jcverdie> god kills me... openid was not working because when I setup the charm I forgot in the public address to put "http" so my url was not canonical :(
<jcverdie> gerrit did not install a git with it.. There are no git charms? I can't find them if there are somewhere
<lazyPower> jcverdie: looking at hooks/gerrit_utils.py, it installs git and gitweb
<jcverdie> ok I might need to push a first project (im dumb sometimes)
<lazyPower> jcverdie: i'm about to leave to attend a superbowl party. Best of luck in your adventures. If you get stuck, try emailing the juju list for non-time sensitive questions.
<jcverdie> sure. thanks a lot for your help
<lazyPower> No Problem :)
<jcverdie> enjoy the party :) I'll check on the seahawks tomorrow in the morning :)
#juju 2015-01-26
<gnuoy`> jamespage, I'd like to sneak this small one into 15.01 if you have a sec for a review https://code.launchpad.net/~gnuoy/charms/trusty/ceilometer-agent/add-nrpe-checks/+merge/247562 (tested with mojo spec dev/full_nrpe)
<jamespage> gnuoy`, +1
<gnuoy`> jamespage, thanks
<dcwilliams_VA> good morning!  Does anyone have any knowledge of a bio-informatics organization or genetics research facility using Juju to deploy sequencing analytics and alignment tools and workflows?
<jacekn_> Hello. I have 2 subordinate charms, one of which provides a non-container relation. When I relate the 2 subordinates nothing happens. What could be the problem, or how can I troubleshoot it?
<Redoubt> I'm trying to run some experiments with Juju, so right now I have two VMs made in VirtualBox: 1 to serve as the Juju bootstrap/orchestrator node, and 1 to be some other node. Both VMs have two NICs, one connected to the vbox NAT (which means they cannot communicate with each other) and one connected to a host-only network, where they _can_ communicate. The problem is that I can't convince Juju to only use that host-only network interface
<Redoubt> My environments/manual.jenv shows both IP addresses on the bootstrap node as state servers. If I remove one it just pops back after a minute
<Redoubt> Both the bootstrap and node's agent.conf show the incorrect bootstrap's IP as the apiserver, but if I change it anywhere and restart jujud it just comes back
<Redoubt> I can't seem to find documentation of the apiaddresses param anywhere
<Redoubt> What is persisting this? How do I change it?
<Redoubt> I assume it must be in a database somewhere and the daemons themselves are overwriting my changes, but then is there no way to change this? I can't imagine this is a unique setup
<Redoubt> It's worth noting that, when initially bootstrapped and added, they communicate fine. The problems are introduced when the machines reboot
<lazyPower> wwitzel3: ^ how does juju determine which interface(s) to use when doing a manual provider environment? is it the first interface in the list? or is there something specific we can do to override that behavior.
<lazyPower> Redoubt: i dont know the answer here, but I'll ping some of the core devs to see if we cant find an answer for you.
<lazyPower> dimitern: see question targeted at wwitzel3 please ^
<dimitern> lazyPower, looking
<lazyPower> Ta
<Redoubt> lazyPower: Thank you!
<dimitern> lazyPower, Redoubt, AFAIKS manual provider uses the "bootstrap-host" setting to determine which IP to use to connect to the host
<dimitern> so this means whichever NIC has that IP will be used
<Redoubt> dimitern: That's what I was led to believe from the docs as well: "All machines added with juju add-machine ssh:... must be able to address and communicate directly with the bootstrap-host, and vice-versa." However, that seems to not be the case, at least after the machines were rebooted
<Redoubt> Now I can't seem to get them to talk
<Redoubt> dimitern: I did set the bootstrap-host to the bootstrap's eth1 static IP before bootstrapping
<dimitern> Redoubt, wait a sec - it seems you're talking about manual provisioning, not manual bootstrap
<Redoubt> Oh, yes indeed I am
<dimitern> Redoubt, ok, so this is different :) you're specifying the IP in the add-machine command - ssh:<user>@<IP/host>
<Redoubt> For the non-bootstrap node, yes that's correct
<dimitern> Redoubt, you can use this in any environment (type: manual is just one of the possible cases)
<dimitern> Redoubt, in a non-manual environment any machine needs to be able to talk to the API server for that environment
<Redoubt> dimitern: Alright-- the "apiaddresses" line seems to be what's off in all the config files. How do I set the API server address?
<dimitern> Redoubt, so if you're doing add-machine ssh:user@IP you need to make sure that machine can access the same IP for the api server as the other machines in the environment
<dimitern> Redoubt, can I have some more details about your deployment please?
<dimitern> Redoubt, what's the output of juju api-endpoints --all --refresh  for example? use paste.ubuntu.com
<wwitzel3> lazyPower: sorry, in my standup, thanks dimitern
<Redoubt> dimitern: http://paste.ubuntu.com/9883603/
<whit> last week for monitorama submissions: http://monitorama.com/#cfp
<Redoubt> dimitern: That address is the one that I'd _like_ to be used
<dimitern> Redoubt, and what happens instead?
<dimitern> Redoubt, there's a way to hack it manually - just edit /var/lib/juju/agents/<your manually provisioned machine subdir>/agent.conf
<Redoubt> When initially bootstrapped for first VM and add-machine'd for second VM, it worked fine. When the machines rebooted, they started using the other interface instead (IP 10.0.2.15), which is a network they cannot communicate on
<Redoubt> I tried that, but then when I restarted jujud, it overwrote the agent.conf
<dimitern> Redoubt, hmm..
<Redoubt> dimitern: Indeed!
<dimitern> Redoubt, well for a really ugly hack you need to change the list of addresses of the api server directly in mongo
<Redoubt> Haha, I was afraid of that
<Redoubt> That would be on the bootstrap node, I assume?
<dimitern> Redoubt, yeah, I'm afraid your case is not very well supported, but please file a bug about it so we can keep track of it
<Redoubt> dimitern: Alright, good to know
<dimitern> Redoubt, in most of the cases like that we had so far the manually provisioned machines and the others were on the same network (or can see each other at least)
<Redoubt> dimitern: Yeah, I've run into similar problems with this network topology with MAAS and juju both. Virtualbox just isn't the best testing ground for those, eh?
<dimitern> Redoubt, have you tried kvm? :)
<Redoubt> dimitern: No. Perhaps it's time to start!
<dimitern> Redoubt, sorry I couldn't help you more with your issue :/ I'd appreciate it a lot if you find a time to file a bug against juju-core though
<Redoubt> dimitern: No problem! I appreciate your time! I'll do that now. I'm not really sure how I would sum up my problem though, other than "There's something wrong with the API server and multiple NICs" :P . Any suggestions?
<dimitern> Redoubt, how about "manually provisioned machines with multiple networks cannot connect to the API server after reboot" ?
<Redoubt> dimitern: Good deal, thanks :)
<dimitern> Redoubt, :) np
<arosales> nicopace: marcoceppi, whit, lazyPower, and mbruzek are also good folks to ping on charm testing questions :-)
<lazyPower> o/
<arosales> nicopace: thanks for your work on those
<marcoceppi> \o
<whit> heyo
<whit> hey marcoceppi moving the convo here
<whit> marcoceppi,  I have no idea what a basket is.  was that some other charm aggregation scheme?
<marcoceppi> whit: good, you shouldn't know what a basket is
<marcoceppi> whit: it was just an internal name, which referred to the bundles.yaml file, since bundles was an overloaded term
<marcoceppi> where you had a bundle (file) which could in turn have multiple bundles
<marcoceppi> anyways, tldr, bundles are just a single deployment going forward
<marcoceppi> fwereade should have more information on the bundle format going forward
<whit> marcoceppi,  currently we are using inheritance in the kubes bundle to support dev vs. released deploys
<whit> marcoceppi, or multiple bundle file which make bundletester vomit
<whit> *files
<marcoceppi> whit: openstack-charmers has the same issue
<marcoceppi> wrt to inheritance
<whit> well... if we keep using deployer it's not an issue
<marcoceppi> right, but I imagine after core gets support for this deployer won't be used. We'd at least make the switch to use core instead of deployer in amulet
<marcoceppi> to avoid too much skew
<marcoceppi> s/won't be used/won't be maintained/
<marcoceppi> You can model inheritance still, it'd just have to be done externally in another tool
<whit> like deployer
<lazyPower> marcoceppi: is that more along the lines of bundle generation vs implied inheritance?
<whit> deployer is what works now
<lazyPower> ergo: you define some core-suite, and then add on services.
<marcoceppi> lazyPower: sure, that's one way
<whit> so the  testing tools should support the old and new no?
<marcoceppi> whit: well, we'll support what core recommends. Right now it's deployer as the underpinnings. If deployer were to change so it maintained inheritance and used core underneath it, sure
<marcoceppi> but I'm not sure the current plans for deployer
<marcoceppi> or when this feature will land in core
<marcoceppi> I merely have approximate knowledge of everything ;)
<nicopace> arosales: great! if something comes up, i'll be talking to you guys marcoceppi whit lazyPower mbruzek
<whit> well... current plans for deployer are we are using it to get work done
<whit> will hack around current issues
<marcoceppi> well, amulet doesn't have a concept of multiple deployments per bundle.yaml in the load command since load was meant for more simplistic deployments
<marcoceppi> I'd be open to merge req to fix it, I don't have the time currently but it shouldn't be too hard
<arosales> nicopace: sounds good
<marcoceppi> whit: I'm hesitant to add a feature as it means I have to maintain it or break compat in a new major release. Trying to avoid compat breaks when possible in amulet
<whit> marcoceppi, cool
<Redoubt> dimitern: https://bugs.launchpad.net/juju-core/+bug/1414710 . Thanks again for your help :)
<mup> Bug #1414710: Manually provisioned machines with multiple networks cannot connect to API server after reboot <juju-core:New> <https://launchpad.net/bugs/1414710>
<marcoceppi> but if it was a real blocker for you guys, sure, I'd add it whit
<whit> marcoceppi, not a blocker, just a workflow annoyance
<dimitern> Redoubt, thank you for the bug report! :)
<hazmat> whit, marcoceppi which issues? i've got some coding left.
<hazmat> er. time
<marcoceppi> hazmat: the whole core not doing inheritance for bundles going forward when bundles get native support
<hazmat> marcoceppi, ah.. a core issue
<hazmat> marcoceppi, core/cstore could just actualize inheritance trees when storing and serving
<hazmat> albeit that's not a live ref to the parent
<whit> hazmat, yeah, we were trying to make a nice switch from local dev to personal namespace charms for development purposes and discovered that the future is unfriendly to such things
<blr> is there a tool for templating mojo specs?
<marcoceppi> blr: no, not that I'm aware of
<blr> marcoceppi: thanks
<beuno> sinzui, utlemming, ping
<beuno> we're trying to deploy to AWS Beijing region
<beuno> but juju seems to not know about that region, we think
<beuno> any tips?
<beuno> *cough* thumper *cough*
<sinzui> beuno, I cannot speak about os images. If that region cannot see streams.canonical.com, then you will need to publish your own streams to that region, or just use --upload-tools
<beuno> noodles775, ^
<beuno> sinzui, but I thought you guys ran tests against that region?
<sinzui> beuno, no, I have said this repeatedly. Juju QA does not have access to that region
<beuno> sinzui, oh, you said that, but then utlemming tells me he does, and does QA against it
<beuno> so I guess I'm confused
<beuno> maybe you guys are doing separate things
<beuno> sorry to be dense here
<sinzui> beuno, I think that means os-images can be found, but not agent streams (which --upload-tools will solve)
 * beuno defers to noodles775 
<beuno> thanks sinzui!
<thumper> beuno: o/
<beuno> hey thumper!
<beuno> I was betting on sinzui not being awake
<noodles775> I'll try with --upload-tools. Here's the --debug bootstrap output: https://pastebin.canonical.com/124267/
<thumper> was making lunch
<beuno> the famous Penhey's lunches, I remember
<thumper> if it is tools related, try wallyworld_
<thumper> he knows more
<thumper> beuno: :-)
<sinzui> beuno, noodles775 that is an os-image error
 * sinzui thinks
<wallyworld_> do we have image data for cn-north-1?
<noodles775> sinzui: OK - which explains why it still fails with --upload-tools? https://pastebin.canonical.com/124268/
<sinzui> beuno, noodles775 I think you need to set an alternate url in environments.yaml "image-metadata-url" to point to a stream in that region. You may need to use juju metadata generate-image from images you have downloaded from cloud-images.ubuntu.com
#juju 2015-01-27
<wallyworld_> the error isn't tools related
<thumper> wallyworld_: sorry, was lazily helping as I'm eating :)
<wallyworld_> np:-)
<sinzui> noodles775, beuno then upload the images and streams to a location in that region
<noodles775> OK, I've got a bit of learning to do there. sinzui if there are any docs outlining the process, let me know. Thanks.
<sinzui> noodles775, beuno: this article covers some of the details, but it would be nice to find a location in that region that provides the streams for free: https://juju.ubuntu.com/docs/howto-privatecloud.html
<noodles775> Also, how can I check which juju version was the first to include a goamz with cn-north-1? I see it's in current, but it's not in the juju installed on a deployment machine (which I'm trying to get updated, but is currently on 1.18.4)
<noodles775> Great, thanks sinzui
<sinzui> noodles775, juju 1.20.0 had cn knowledge
<sinzui> noodles775, you can still the tools/agents parts of that doc to get running. --upload-tools will get you an env to play with.
<sinzui> s/still/skip/
<noodles775> sinzui: thanks. So, just checking my understanding, the issue is because cn-north-1 isn't included here right? http://cloud-images.ubuntu.com/daily/server/trusty/current/
<sinzui> noodles775, try this in environment.yaml: image-metadata-url: https://cloud-images.ubuntu.com/releases
<sinzui> noodles775, the url we see in your paste is for daily, and there is no cn in that stream. the released stream lists cn
<noodles775> sinzui: right, I was specifically using daily in the environments.yaml, let me switch that and try first, then I'll try with the image-metadata-url.
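sinzui's suggested override is a single key in the environment's stanza of environments.yaml; a sketch (the environment name and the other keys here are illustrative, not taken from the log):

```yaml
environments:
  aws-cn:
    type: ec2
    region: cn-north-1
    # point image metadata at the released stream, which lists cn regions
    image-metadata-url: https://cloud-images.ubuntu.com/releases
```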
<sinzui> noodles775, I am not sure about the url. I think juju will see the streams/ dir below that path and work
<noodles775> sinzui: here's the output when not using the daily stream: https://pastebin.canonical.com/124270/ - I assume that means I don't need to try with the image-metadata-url (as it seems to be using /releases)?
<sinzui> noodles775, right, that is the default url, but I see we do have cn in the file http://cloud-images.ubuntu.com/releases/streams/v1/index.json
<sinzui> noodles775, ah, the url is slightly different https://ec2.cn-north-1.amazonaws.com.cn in the paste != https://ec2.cn-north-1.amazonaws-cn.com.cn in the json
<sinzui> Not sure why juju isn't looking at https://ec2.cn-north-1.amazonaws-cn.com.cn as specified in the json
<noodles775> sinzui: right, good spot. (s/amazonaws/amazonaws-cn).
<sinzui> noodles775, I wonder if that region is behind a caching proxy. Your env is reading stale data.
<sinzui> noodles775, if you are in that region, can you curl http://cloud-images.ubuntu.com/releases/streams/v1/index.json to verify the index states it was made  26 Jan 2015
<noodles775> sinzui: I'm not, but I might be able to ask someone who is.
<noodles775> sinzui: do you know that an old version did in fact have the incorrect url that we're seeing? (I've pinged nessita, but might not hear straight back)
<sinzui> noodles775, I don't. I don't know much here. I am applying my experience of missing agents to missing images. Stale caching is often the cause of urls that don't match the recently published data
<noodles775> sinzui: seems correct from cn also: https://pastebin.canonical.com/124272/ . Note, I'm bootstrapping on a canonistack instance in the DC, and I can confirm that it also sees the correct url. So I'm not sure where the amazonaws.com.cn is coming from?
<noodles775> s/on a/from a/
<sinzui> noodles775, that is the file I was looking at
<noodles775> sinzui: actually, - looks like it's goamz https://pastebin.canonical.com/124273/
<noodles775> which would mean a fix would require a new juju version :/ (if that's right)
<sinzui> :(
<sinzui> noodles775, yep
<noodles775> sinzui: do you want me to create a bug (guessing you can probably create something more specific, but happy to create it if you're ready to EOD or whatever)
<sinzui> noodles775, you can report the bug. Include when this needs to be fixed. that will help determine if we change 1.21.1 or 1.22.0
<noodles775> sinzui: We're trying to deploy to cn-north-1 now, so I'm guessing beuno will say asap. Creating the bug now.
<sinzui> noodles775, thank you. I will bring the bug to the various parties
<hloeung> can we backport the fix to the juju version in Trusty as well?
<noodles775> sinzui: actually, we should first check whether the url we think is the incorrect one actually is (ie. could it be that the goamz one is correct, while the stream is not - I've not checked)
<sinzui> noodles775, good point. I think #cloudware and utlemming are the experts. I think they will say their streams are correct and the code from last year is wrong
<noodles775> sinzui: OK, I'll just note that on the git bug.
<noodles775> sinzui: did you see hloeung's question above? (as he's providing the juju version which we're using)
<noodles775> sinzui: actually, hloeung just pointed out that amazonaws.com.cn resolves, while amazonaws-cn.com.cn does not - so I'm hopeful it is the stream :)
<hloeung> yeah, it looks like it's just the stream - "endpoint": "https://ec2.cn-north-1.amazonaws-cn.com.cn"
<hloeung> $ host ec2.cn-north-1.amazonaws-cn.com.cn
<hloeung> Host ec2.cn-north-1.amazonaws-cn.com.cn not found: 3(NXDOMAIN)
<hloeung> "com.ubuntu.cloud:released:aws-cn": { "updated": "Wed, 10 Dec 2014 20:45:42 +0000",
<hloeung> taken from http://cloud-images.ubuntu.com/releases/streams/v1/index.json
<sinzui> noodles775, okay, this is good news. I think we can contrive our own streams using what already exists
<noodles775> sinzui: OK, I'd just created the bug, but feel free to mark it invalid (or git equiv) https://github.com/go-amz/amz/issues/21
<hloeung> sinzui: I checked all the other endpoints, it seems us-gov-west-1 needs fixing as well
<hloeung>      "endpoint": "https://ec2.us-gov-west-1.amazonaws-govcloud.com"
<noodles775> sinzui: How often are the streams updated?
<sinzui> noodles775, still a bug since we are talking about work arounds. We can copy just a few files from cloud-images, then correct the urls.
<hloeung> sinzui: should be ec2.us-gov-west-1.amazonaws.com
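The two endpoint mismatches flagged above can be checked mechanically. A hedged sketch, using a hand-written miniature of the streams index (endpoints copied from the log, not fetched live; the real index.json schema is more nested than this):

```python
import json

# Miniature stand-in for cloud-images.ubuntu.com/releases/streams/v1/index.json,
# hand-written from the endpoints quoted in the log above.
stream_index = json.loads("""
{
  "com.ubuntu.cloud:released:aws-cn": {
    "region": "cn-north-1",
    "endpoint": "https://ec2.cn-north-1.amazonaws-cn.com.cn"
  },
  "com.ubuntu.cloud:released:aws-govcloud": {
    "region": "us-gov-west-1",
    "endpoint": "https://ec2.us-gov-west-1.amazonaws-govcloud.com"
  }
}
""")

# Endpoints goamz ships (per goamz/goamz/blob/master/aws/regions.go as cited
# above) -- these are the ones hloeung confirmed actually resolve in DNS.
goamz_endpoints = {
    "cn-north-1": "https://ec2.cn-north-1.amazonaws.com.cn",
    "us-gov-west-1": "https://ec2.us-gov-west-1.amazonaws.com",
}

def find_mismatches(index, known):
    """Return {region: (stream_endpoint, known_endpoint)} where they differ."""
    mismatches = {}
    for product in index.values():
        region = product["region"]
        if known.get(region) and product["endpoint"] != known[region]:
            mismatches[region] = (product["endpoint"], known[region])
    return mismatches

for region, (stream, expected) in sorted(
        find_mismatches(stream_index, goamz_endpoints).items()):
    print(f"{region}: stream has {stream}, expected {expected}")
```

Run against the real index this would need an HTTP fetch and the actual product/version nesting; the comparison logic is the point here.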
<sinzui> noodles775, I cannot speak for image streams. I think they are updated every time a package in the image changes or a new region is added
<sinzui> hloeung, I'll take your word for that. I can claim to be an expert on juju agent streams, but not cloud images
<noodles775> sinzui: How is it a bug in the goamz code though? I thought the proper fix would be for the stream to be fixed (by the people you mentioned earlier)? (ie. not trying to get you to do something, just wondering what the correct fix is here)
<hloeung> sinzui: heh, I'm just going with what's in goamz (https://github.com/goamz/goamz/blob/master/aws/regions.go) and that seems to resolve
<sinzui> noodles775, as I said. I don't know anything about image streams, but I know the people who test in china are the authors of the streams. I don't know of anyone using goamz to test cn
<noodles775> sinzui: Great, are you able to ping/contact those people about the bug then? (I'm assuming they're using the same juju with goamz compiled in).
<sinzui> noodles775, I am sure they are NOT
<sinzui> noodles775, cloudware makes os images. I test juju. I don't work with them and vice versa.
<sinzui> So I am not surprised juju doesn't just work
<noodles775> sinzui: Sorry, I misunderstood - I thought they were testing juju in china.
<Tug> Hey, I just read that Google Cloud Platform was an ubuntu partner http://partners.ubuntu.com/partner-programmes/public-cloud
<Tug> Do you think we're going to have a google cloud provider for juju in the future ?
<dimitern> Tug, yes, it's under development and should be ready soon
<Tug> dimitern, cool Thank you
<hazmat> Tug, there's an extant merge proposal re gce
<mwak> hello
<marcoceppi> hey mwak, how's it going?
<mwak> good and you marcoceppi
<marcoceppi> doing great
<mwak> :)
<mattrae> hi, i'm using juju 1.20.14. it appears that when doing debug-hooks and using 'exit 1' to pause the queue does not work. it moves on to the next queued hook and clears the hook error. this is making debug-hooks mostly useless
<mattrae> and we may have to redeploy because i don't know if i can retrigger the same hook that was incorrectly resolved
<mattrae> is there any version of juju that fixes the 'exit 1' debug-hooks issue, or any way to retrigger the hook that was originally in error state?
<drbidwell> How do I find out why a "juju deploy" stays in a pending state?  I can ssh to the host as ubuntu@host and I can do "juju ssh 1" .  The auth.log doesn't show any attempts to log in and I don't find anything in any of the /var/log/juju/ logs.
<lazyPower|Travel> drbidwell: what provider?
<drbidwell> lazyPower|Travel: maas on ubuntu 14.04.1 using maas 1.7.1 rc4 and juju 1.20.14
<lazyPower|Travel> drbidwell: can the unit communicate with the bootstrap node? that's typically why a unit will stay in pending and not come online, in my experience with maas.
<lazyPower|Travel> drbidwell: best guess is to ssh into that pending unit and attempt to ping the ip of the bootstrap node and ensure there is connectivity
<drbidwell> lazyPower|Travel: I can ping from the bootstrap node to the target node
<Guest75565> hi
<drbidwell>  lazyPower|Travel: I can ping from the target to the bootstrap node also
<Guest75565> has anyone come across this error "getInstanceNetworkInterfaces failed: invalid hardware information" while deploying a charm
<lazyPower|Travel> drbidwell: hmm seems like its in order then
<Guest75565> i am running juju and maas as a vm
 * lazyPower|Travel nods
<lazyPower|Travel> thats how i got started with maas too - however at the moment i dont have any other ideas off the top of my head and I'm about to board a flight
<lazyPower|Travel> drbidwell: i'll be back around in just over an hour, i'll follow up if you're still here
<noodles775> axw, sinzui, utlemming : I've just retried bootstrapping for cn-north-1, but it fails with a different error. I've updated the issue with a (private) paste, but don't see an option to reopen the bug: https://github.com/go-amz/amz/issues/21
<noodles775> hrm, maybe it should be a separate issue anyway, as the endpoints now match.
<utlemming> noodles775: yeah, that is a separate issue...and may be related to the Great Firewall of China. Unless you are playing in China...yeah, good luck with that.
<utlemming> (and I am only half kidding)
<sinzui> noodles775, I think you need to be in china to use the region. it is for Chinese businesses
<sinzui> noodles775, Juju QA requires a Chinese host, preferably one on which we can set up a jenkins slave to test stream health.
<noodles775> sinzui: I don't think that's the case for aws itself (but will check by seeing if I can launch instances there sans juju), but it certainly looks like we'll need a deployment host there from which to manage deployments :/ I'll try.
<sinzui> noodles775, that is good phrasing of what Juju QA wants.
<noodles775> utlemming: Have you confirmed that `juju bootstrap` on cn-north-1 works there with the updated streams? (I don't want to go down that path unless we know it is indeed working)
#juju 2015-01-28
<noodles775> axw: You were right about the access to s3, I can access the url from other machines. On another machine using the same creds, it still fails to bootstrap, this time with:
<noodles775> 2015-01-28 02:00:39 ERROR juju.cmd supercommand.go:323 cannot determine if environment is already bootstrapped.: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
<noodles775> but I can create a separate bug for that (assuming it's not something obvious)
<axw> noodles775: cool. yep, I think that needs a bug - not sure if that's a Juju or goamz one, probably the latter
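For context on that bootstrap error: the China region only accepts AWS Signature Version 4 requests, while older signing code uses the legacy scheme. A rough sketch of the SigV4 signing-key derivation in shell with openssl; the secret, date, region, and service below are made-up example values, not real credentials:

```shell
# SigV4 signing key = chained HMAC-SHA256 over date, region, service, "aws4_request".
# All inputs here are illustrative example values.
secret='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
date='20150129'; region='cn-north-1'; service='s3'

hmac_hex() {  # hmac_hex <hex key> <message> -> hex digest
  printf '%s' "$2" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$1" | sed 's/^.* //'
}

# the chain starts with the literal bytes "AWS4" + secret, hex-encoded for openssl
kstart=$(printf 'AWS4%s' "$secret" | od -An -tx1 | tr -d ' \n')
kdate=$(hmac_hex "$kstart" "$date")
kregion=$(hmac_hex "$kdate" "$region")
kservice=$(hmac_hex "$kregion" "$service")
ksigning=$(hmac_hex "$kservice" "aws4_request")
echo "signing key: $ksigning"
```

Requests against cn-north-1 then carry an `Authorization: AWS4-HMAC-SHA256 ...` header computed with that key, which is what the error message is asking for.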
<mthaddon> I'm trying to upgrade a running juju environment from 1.16 (!) to 1.18 (so we can later upgrade it to 1.2x). When I run juju upgrade-juju I get "WARNING running in 1.16 compatibility mode". juju --version reports 1.18.4-precise-amd64
<gnuoy`> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/neutron-openvswitch/fis-amqp-ssl/+merge/247806 if you have a moment
<jamespage> gnuoy`, +1
<gnuoy`> thanks
<gnuoy`> jamespage, do you have any time to test a network splits deploy with the next charms today ?
<jamespage> gnuoy`, I can make some
<gnuoy`> jamespage, that would be great, thank you
<gnuoy`> jamespage, I have a keystone ssl fix from dosaboy to land, I'll let you know when that's done.
<mthaddon> turns out I needed to add --upload-tools to the command, for those following along at home
<lazyPower|Travel> mthaddon: that may and probably will yield odd behavior down the road
<lazyPower|Travel> --upload-tools is only intended to be used by developers working on client code. There's a rather long thread on this over on the mailing list from ~ 5 days ago
<mthaddon> lazyPower|Travel: orly? what kinds of things?
<lazyPower|Travel> mthaddon: YMMV - it's chunking up a custom client based on what's in your local environment, doesn't follow the semantic versioning of the existing client releases, and hasn't been tested with long running environments.
<lazyPower|Travel> the response i got when asking about that was "there be dragons there, that command should never be used by mere mortals"
<mthaddon> lazyPower|Travel: is there some other way of upgrading from such an old version of juju? <-- dimitern, you may be interested in this given you helped me with that
<lazyPower|Travel> mthaddon: you're going from .16 to .18 right?
<mthaddon> lazyPower|Travel: yep
<lazyPower|Travel> juju upgrade-juju should have done the trick
<mthaddon> lazyPower|Travel: yeah, but it didn't unfortunately :)
<lazyPower|Travel> which will lockstep the deployed release in the environment to the one you have locally from streams.
<lazyPower|Travel> are you running -beta?
<lazyPower|Travel> er -devel
<dimitern> lazyPower|Travel, it didn't unfortunately - not without --upload-tools
<mthaddon> 1.18.4-precise-amd64
<lazyPower|Travel> weird
<lazyPower|Travel> seems like there's a problem there in upgrade-juju tbh
<lazyPower|Travel> i would most def follow up with sinzui about this, as that upgrade path should have been tested and blessed (not sure if 1.18.4 specifically was tested, we might have broken it)
<lazyPower|Travel> mthaddon: this is a production env yes?
<mthaddon> I'd be happy to follow up with sinzui - we have a lot of envs (staging and production) to upgrade, so I'd like to figure out whatever the "blessed" path is
<mthaddon> lazyPower|Travel: this particular env was a staging one (always start with staging if you can... )
<dimitern> 1.18 is not tested anymore
<lazyPower|Travel> yeah, i think that's where my thought process is going. I don't want to gaslight you if that's the only way to upgrade the env, but i've been told that it's not recommended.
<mthaddon> yeah, understood - I'm certainly happy to wait for confirmation either way - thx
 * lazyPower|Travel hattips
<lazyPower|Travel> dimitern: good to know. did we drop testing 1.18 after the 1.20 rel?
<gnuoy`> jamespage, dosaboys keystone change has landed
<dimitern> lazyPower|Travel, AFAIK 1.18 never went into automated upgrades testing
<dimitern> those tests weren't in place at that time
<jamespage> gnuoy`, ack on it now
<gnuoy`> thank you, much appreciated
<jamespage> gnuoy`, urgh - frustratingly juju-core 1.21 decided to ignore all my existing machines when deploying charms
<jamespage> ...
<gnuoy`> ahhh /o\
<jamespage> gnuoy`, checking to see if that's a change since 1.20
<jamespage> gnuoy`, it does indeed seem to be different
<lazyPower|Travel> dimitern: ah ok. I thought we had those in place ~ 1.16 - i was sorely mistaken.
<jamespage> gnuoy, generally looking ok
<jlondon> Hello. I was wondering if anyone had a working yaml for the hacluster charm? I can't seem to get it to work with percona-mysql as it doesn't ever seem to modify the corosync config file with the proper interface name.
<sebas5384> hey guys! o/
<sebas5384> I found some strange behaviors in the juju gui at the demo site
<sebas5384> here's a screenshot
<sebas5384> http://awesomescreenshot.com/0024ae5f3e
<lazyPower> sebas5384: dump cache and reload - thats how i resolved the tiny icons
<lazyPower> rick_h_: ^
<sebas5384> lazyPower: hey! o/
<lazyPower> o/
<sebas5384> lazyPower: hmmm I'll check that out
<lazyPower> sebas5384: seems that there's some weirdness with the asset caching that causes the icon issues :(
<mwak> heya
<sebas5384> lazyPower: nope :( still having the bug after clearing the cache
<lazyPower> sebas5384: can you file a bug targeted at juju-gui so we can track it?
<sebas5384> lazyPower: https://bugs.launchpad.net/juju-gui/+bug/1415550
<mup> Bug #1415550: Charm's SVG icons with wrong size in Chrome <icon> <svg> <juju-gui:New> <https://launchpad.net/bugs/1415550>
<sebas5384> ha!
<sebas5384> hatch: told me to do that in the juju-gui channel
<lazyPower> :)
<whit> mbruzek,  I use: juju set-constraints instance-type="m3.xlarge"
<jlondon> I'll ask again now that a few people are around. Are there any known issues with the hacluster (corosync/pacemaker) charm? No matter what configuration options I throw at it, it doesn't seem to change the service configuration to reflect the correct network card and thus startup fails.
<beisner> jamespage, jog, sinzui, FYI The 8 currently-supported openstack deploy test targets all pass osci with juju/proposed (1.21.0-0ubuntu1~14.04.1~juju1). [precise-icehouse, trusty-icehouse, trusty-juno, utopic-juno] x [stable, next charms];  We also cycled mojo openstack deploys and amulet tests with this version in osci without issue.
<jog> beisner, awesome
<sinzui> beisner, \o/
<jlondon> So, no one? Please... I'll buy you a beer :D
<Guest63482> i get an error when adding more than one of the same service
<Guest63482> i meant that repeating the command "juju deploy --to lxc:0 wordpress" fails
<Guest63482> i want two wordpress services for two different websites
<Guest63482> any suggestions
<Guest63482> ?
<jlondon> Good luck. Like the other Ubuntu related channels help from the devs seems to be limited :\
<beisner> hi jlondon, most of the hacluster devs are super busy working on this week's next release of the openstack charm set (which encompasses the hacluster charm); it may be worth waiting to get those on Fri. fyi, we do use hacluster and percona-cluster charms, though i don't have an example off-hand.
<jlondon> beisner: Appreciate the response. Thanks for letting me know! I'll wait for those and try again!
<beisner> jlondon, yes please do.  you may also find a wider hacluster audience on #ubuntu-server.
<Odd_Bloke> Guest63482: I think you have to give Juju a name for the second Wordpress.
<Odd_Bloke> Guest63482: By default, it will use 'wordpress', but I guess it's erroring because of the duplicate name.
<jlondon> beisner: Okay thanks. Do you know though if hacluster is expecting to be in a container? I am not placing it in one... maybe that's my issue.
<Odd_Bloke> Guest63482: (For reference, putting the output you're seeing on http://paste.ubuntu.com makes working out what's going on easier :))
<Guest63482> Odd_Bloke:  can you hint me how to do so plz?
<Odd_Bloke> Guest63482: The first line of 'juju deploy --help' is "usage: juju deploy [options] <charm name> [<service name>]" ;)
<Guest63482> how to deploy wordpress with an other name
<Guest63482> ?
<Guest63482> k
<Guest63482> got it thanks
<Guest63482> odd_bloke: is there a way to do so through gui ?
<beisner> jlondon, in our deployment, 3 percona-cluster units are in lxc containers; and the hacluster is subordinate-to percona-cluster.   though i do not think that containers are explicitly required(?).
<jlondon> beisner: Got it. I'll try containers regardless if that's what you guys have working.
<Guest63482> yes i got it it works
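For anyone hitting the same thing: the optional second positional argument to deploy is the service name, and the default name collides on the second deploy. A sketch of the fix, using a stub juju function here so the lines just echo (the name blog-two is made up):

```shell
# stub standing in for the real juju client; it only echoes the command
juju() { echo "would run: juju $*"; }

juju deploy wordpress            # first service gets the default name "wordpress"
juju deploy wordpress blog-two   # a second deploy needs an explicit service name
```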
<arosales> jcastro: really nice post @ https://insights.ubuntu.com/2015/01/28/datacentres-in-containers-and-containers-in-datacentres/
<catbus1> Hi, I have juju bootstrapped, is the bootstrap log stored somewhere?
<blr> catbus1: are the bootstrap logs consolidated into 'juju debug-log'?
<noodles775> axw: do you know, if https://github.com/go-amz/amz/issues/22 is fixed, whether it'd be likely to be included in an update to the juju in trusty? (trying to find out if I need to ask for multiple versions of juju on our deployment machine)
<noodles775> We generally only use stable releases (so upgrading environments is supported)
<jlondon> Anyone know how to get lxc to use the correct maas provided bridge name? It's defaulting to lxcbr0 and not juju-br0.
#juju 2015-01-29
<axw> noodles775: I couldn't say, sorry. I'm not sure who prioritises things like that... maybe check with sinzui?
<noodles775> axw: yep, will do. Thanks.
<axw> noodles775: probably also a good idea to create a bug in launchpad.net/juju-core, so it's on his radar
<noodles775> axw: ack, will do too :)
<thumper> jlondon: how are you creating the lxc containers?
 * thumper wanders off for a bit, bbs
<jlondon> thumper: Just via juju+maas. Juju is spinning up a physical host in maas and then attempting to create them inside that host.
<jlondon> and then the containers just get stuck in the 'pending' state with no network information that I could see.
<thumper> jlondon: hmm...
<thumper> jlondon: using juju to create the containers? or ssh'ing in and doing it by hand?
<jlondon> thumper: juju. I'm trying to avoid doing anything on the server manually :)
<thumper> hmm...
<jlondon> Should I try kvm instead?
<thumper> that probably wouldn't help
<jlondon> k
<thumper> can you ssh into the machine and look at /var/lib/juju/containers/<containername>/lxc.conf?
<thumper> look for network config in there
<jlondon> thumper: K. One second I'm spinning them back up with fresh installs.
<jlondon> thumper: huh, okay, it appears to be setting the network to juju-br0 in the container level config file.
<thumper> jlondon: ok, it is something else then
<thumper> I did see a bug fix land recently that disabled the networking worker because it was screwing up lxc containers
<thumper> jlondon: which version of juju?
<jlondon> thumper: stable: 1.20.14
<jlondon> seeing this in the console.log file for one of the containers (over and over):  lxc_commands - commands.c:lxc_cmd_handler:888 - peer has disconnected
<jlondon> looking at the logs further for one, it doesn't appear to be getting an IP at all.
<jlondon> dhcp is running on the network for sure though (maas provided)
<thumper> hmm... not sure sorry
<jlondon> thumper: No worries. I'll keep playing around with it.
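The lxc.conf check thumper walked through above can be scripted. A sketch against a sample file in /tmp; the sample contents are illustrative, not copied from a real deployment:

```shell
# sample of the network section juju writes into
# /var/lib/juju/containers/<containername>/lxc.conf
cat > /tmp/lxc.conf <<'EOF'
lxc.network.type = veth
lxc.network.link = juju-br0
lxc.network.flags = up
EOF

# extract the bridge the container will attach to; juju-br0 is right,
# lxcbr0 would be one explanation for containers stuck in pending on maas
bridge=$(awk -F' = ' '/^lxc.network.link/ {print $2}' /tmp/lxc.conf)
echo "container bridge: $bridge"
```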
<lazyPower> seal: o/
<seal> lazyPower: o/
<lazyPower> how goes the exploring?
<seal> great work on http://chuckbutler.github.io/flannel-docker-charm/user/getting-started.html
<lazyPower> Thanks :)
<lazyPower> we've actually vendored a bundle to make getting started quickly very easy with our charms. Let me fish up that link for you
<seal> great
<lazyPower> https://github.com/mbruzek/docker-bundle
<lazyPower> best part is, it's got a suite of amulet tests with it, so if you're interested in how we're running the deployments and validating, you can inspect the test(s) in the test directory
<lazyPower> also, the charms + bundle are up for review in the queue so you should see them landing in the store soon'ish
<seal> I need to look into bundles more. I haven't had much joy getting it to work on my machine
<seal> Will be looking out for the bundles release
<lazyPower> seal: oh? anything i can help with on the bundles?
<mwak> hi
<lazyPower> o/ mwak
<mwak> how are you lazyPower
<mwak> ?
<lazyPower> Good, preparing for talks over the next few days
<lazyPower> how about yourself mwak?
<mwak> fosdem?
<mwak> ping jcastro
<lazyPower> mwak: FLOSS Community Metrics, FOSDEM, and Config Management Camp
<mwak> great!
<seal> lazyPower: yes please! for example I can deploy this bundle via gui https://demo.jujucharms.com/bundle/web-infrastructure-in-a-box-10/?text=bundles
 * lazyPower nods
<lazyPower> i'm familiar with this bundle
<mwak> I didn't get time to fix the hadoop charm for arm yet, should try to get some time to investigate :/
<lazyPower> you can also fetch it and deploy from the CLI with juju quickstart
<seal> however, I get some errors when I attempt to deploy locally. If you have a moment I can clean up my environment and post the exact error
<lazyPower> sure thing seal
<mwak> lazyPower: you personally will be at FOSDEM?
<lazyPower> mwak: indeed. I'm running a talk track there
<mwak> great!
<mwak> topic?
<lazyPower> https://fosdem.org/2015/schedule/event/juju_orchestration/
<lazyPower> crap i forgot to get in touch with them and let them know i'd be taking marco's place.... so its still got his name  on there
<lazyPower> SURPRISE, its lazy :D
<axw> lazyPower: is there a trick to getting the rails charm to work? I've tried a few revs, and they all fail to install with an error about installing the json gem
<lazyPower> axw: i'm working on refactoring that to work with rbenv vs rvm (which is consistently the reason its breaking)
<lazyPower> i have some WIP if you want to pick up the torch on it and help :)
<lazyPower> i'm nearly complete, refactoring out the rvm localized commands has been the lion's share of the work
<axw> lazyPower: I don't think I'd be of much help, I don't know ruby
<lazyPower> axw: however, i am aware of the latest breakage, and unfortunately haven't had a large swath of time to dedicate to refactoring and fixing :( really sorry to report that.
<axw> I can test if it's usable tho
<axw> lazyPower: okey dokey
<lazyPower> well its close, but not bullet proof yet, i think there are still some cases with localized ruby versions that need addressing
<lazyPower> meaning, your app says "i want ruby 2.2.1"
<lazyPower> and i'm installing 2.0.0 as system ruby
<lazyPower> but other than that, it was working well iirc
<lazyPower> let me get you a link 1 sec
<lazyPower> https://github.com/chuckbutler/rails-charm/tree/rbenv_migration
<axw> lazyPower: thanks
<lazyPower> ymmv - i haven't tested it in ~ a month - so the ground may have shifted again under whats there
<lazyPower> lmk if you get success/failure with that and i'll go from there. issues welcome :)
<axw> lazyPower: will do
<axw> lazyPower: gtg, but it still fails. http://paste.ubuntu.com/9934923/
<axw> lazyPower: I'm using https://github.com/pavelpachkovskij/sample-rails
<lazyPower> axw: 10-4, i'll circle back as quickly as i can, it may be a little over a week - depending on time with me being booked for conferences this week/next week
<lazyPower> if you need it for a priority deployment i can try and escalate this
<axw> lazyPower: it's cool. was hoping to use it to demo a new storage feature, but we can find something else
<axw> enjoy the conference
<lazyPower> thanks axw, sorry about the inconvenience
<axw> no worries
<johnny_shieh_> A question:  on the juju master, able to do "sudo sysctl -w fs.file-max=".  However, on container created for charm, this same command fails with the message: sysctl: permission denied on key 'fs.file-max'
<johnny_shieh_> limitation?  workaround?
<marcoceppi> johnny_shieh_: LXC containers created by/with juju have certain apparmor limitations to prevent potentially bad things from occurring
<johnny_shieh_> sure, but if the charm (app) needs certain values set in order to work.......
<lazyPower> johnny_shieh_: chicken and egg with that scenario. AppArmor tuning would have to be deployed to the parent machine which may or may not have a representation on the canvas. What I could suggest here is the following
<lazyPower> deploy the ubuntu charm to get a machine representation on the gui, then deploy a subordinate unit that has the app armor tweaks you're looking to make before you deploy the LXC workload
<lazyPower> the downside to this is you have an order-dependency that isn't very apparent to outsiders looking at the bundle - so ensure you have that callout documented somewhere.
<johnny_shieh_> yeah, hmm, extra effort...and we could always document it...but any other option?  Alter file via brute force?
<lazyPower> if your deployment is juju managed, it makes sense to have it represented in juju.
<jrwren> if only there was a way to execute actions with juju. :)
<lazyPower> and the only way presently to do that would be to either make a charm that does it or a subordinate that is co-located with the parent that is getting the deploy --to lxc:#
<lazyPower> jrwren: not sure how your charm would break out of LXC to modify the hosts apparmor profile
<lazyPower> johnny_shieh_: if you dont want to write a charm to do it and just want a one-off you can juju run any sed/echo/etc. commands to make the modifications, but ymmv if the command is not well formed
<jrwren> lazyPower: it wouldn't. Run the action on the host.
<johnny_shieh_> A bit confused, surely you have had other apps that required changes to default OS (container) settings that needed variables changed. i.e. number of threads, number of files, file sizes.
<johnny_shieh_> Isn't there a standard way of changing the limitations within the container?  Maybe I'm missing the point here.
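To make lazyPower's juju run suggestion concrete: the change has to land on the host machine, since the container's apparmor profile denies the write from inside. Sketched with a stub juju so the line only echoes (the machine id and value are made up):

```shell
juju() { echo "would run: juju $*"; }   # stub; a real environment is needed for real use

# target the host machine (machine 0 here), not the confined container
juju run --machine 0 "sudo sysctl -w fs.file-max=2097152"
```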
<jcastro> alexisb, if someone on your team is looking for something to charm, check this out: http://gogs.io/
<jcastro> that would be a nice charm, and packages/docker containers are already available, looks pretty straightforward
<arosales> niedbalski: Hello
<arosales> niedbalski: you mentioned that a new jenkins charm is on the way.  Is that a complete rewrite or an MP against the existing one. Also any timelines on this?
<mbruzek> niedbalski: I am going through the unmaintained process and am on rsyslog.
<mbruzek> niedbalski: There has been no comment on the bug in over a month.  https://bugs.launchpad.net/charms/precise/+source/rsyslog/+bug/1388274
<mup> Bug #1388274: The rsyslog Makefile test target has errors <audit> <auto-test> <rsyslog (Juju Charms Collection):Triaged by niedbalski> <rsyslog (Charms Precise):Triaged by niedbalski> <rsyslog (Charms Trusty):Triaged> <https://launchpad.net/bugs/1388274>
<mbruzek> niedbalski: you are listed as the maintainer of the rsyslog charms.  If there is no comment or merge requests I may have to move rsyslog to unmaintained namespace
<skay> charm writing question. in some charms I see that people access config() once to set config_data and thereafter get info from config_data. when should I choose to do that versus check config each time?
<skay> how expensive is the call, and why wouldn't I always call it versus checking cached data?
<skay> I'd like to change the python-django charm to inject a value in to ALLOWED_HOSTS when I add a reverseproxy relation
<skay> hence I'm learning more about how/whether to pass the information to the website-relation-joined/changed
<skay> config info is mutable so I would want to check it anytime the relation changes, right?
<skay> hmm, maybe it's a bad idea to add support for that
<skay> context: I'm relating apache2:reverseproxy to python-django:website and don't have a domain name at the moment to set for the python-django django_allowed_hosts config, hence would need to set the public IP of the apache2 unit
<skay> sure, I could set Debug or have allowed hosts be '*' or something
<roadmr> skay: does "*" mean that joe random could access the django server directly, given knowledge of its IP and port?
<skay> roadmr: yes, so you'd only want to do that in a Debugging type of situation on a deserted island
<roadmr> skay: my thought exactly... my concern was one of scalability, it would be easier to DoS the poor gunicorn
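On the caching question skay raised: config can change between hook invocations (via juju set), so the safe pattern is to read it fresh at the start of each hook and reuse that local copy within the run. A sketch with a stubbed config_get standing in for the real config-get hook tool, which is only available inside a hook context (the value is made up):

```shell
# stub for the "config-get" hook tool; the real one queries the unit agent
config_get() { echo "example.com"; }

# read once per hook run, then reuse the local copy
allowed_hosts=$(config_get django_allowed_hosts)
echo "ALLOWED_HOSTS entry: $allowed_hosts"
```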
<catbus1> Hi, in several deployments there are occasional hook failures, which can be resolved by retrying. Sometimes when I check /var/log/juju/unit-UNITNAME.log, it's usually an error running a command, and if I run the command manually, it succeeds. For example, sudo keystone-manage db_sync. Is it because when the hook script ran, perhaps mysql wasn't ready yet?
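If the cause is a dependency racing the hook, as catbus1 suspects, a common charm-side workaround is retrying the flaky command with a short backoff instead of failing the hook outright. A minimal sketch, demoed with true in place of a real db_sync command:

```shell
# retry a command up to $max times, backing off a bit more each attempt
retry() {
  local n=0 max=5
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$max" ] && return 1
    sleep "$n"
  done
}

# demo with a command that succeeds immediately
retry true && echo "command succeeded"
```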
<coreycb> hey all, for the local provider, can I deploy machines to lxc and kvm containers on the host?  I have 'container: kvm' set and I'm under the impression that any lxc containers have to be inside a deployed kvm machine.
<dimitern> coreycb, the container type kvm or lxc defines what containers to use as "machines" in a local environment - either kvm or lxc (not both), however, you can still try to deploy units in lxc or kvm with --to lxc:<machine-id> or --to kvm:<machine-id> (argument to deploy)
<dimitern> coreycb, please note though, lxc-in-lxc or kvm-in-lxc might not work out of the box
<coreycb> dimitern, ok so just to verify, if container type is kvm then any lxc containers have to be nested in kvm?
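dimitern's placement point as concrete commands, with a stub juju so the lines only echo (machine id 1 is illustrative). With container: kvm in environments.yaml, plain deploys become KVM guests, but a unit can still be pushed into lxc explicitly:

```shell
juju() { echo "would run: juju $*"; }   # stub, echoes only

juju deploy mysql                # lands in a new kvm "machine"
juju deploy mysql --to lxc:1     # nests a unit in lxc on machine 1 (may need extra setup)
```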
<mmcc> hi folks, I'm having issues accessing an environment that was bootstrapped with JUJU_HOME set. is that well tested? details here: http://paste.ubuntu.com/9941188/
<mmcc> this is used in our openstack installer, so if it's not really supported well, we need to pull it out
<marcoceppi> mmcc: it is very well tested
<marcoceppi> mmcc: those errors are from juju version mismatches
<marcoceppi> the environment has 1.21.1 installed
<marcoceppi> which was released earlier today
<marcoceppi> you have 1.20.14
<marcoceppi> upgrade to 1.21 using ppa:juju/stable and you won't have any issues
<mmcc> marcoceppi: ok, so it sounds like what happened is that I used the 1.20.14 cli to bootstrap, and it installed 1.21.1 and wrote an environment file that the old version can't read? why would it do that?
<marcoceppi> mmcc: it sounds like the cloud installer installed and used the latest version of juju
<marcoceppi> which is different than what you have installed. The client should always bootstrap an environment with tools matching its own version
<marcoceppi> either way, that's the issue, you can see that in the tree output of the tools directory
<mmcc> marcoceppi: ah, ok. I do see what's going on here. thanks for your help.
<mbruzek> Hello jose are you out there?
<flakrat> Howdy, I'm deploying my first compute node using "openstack-install" and after configuring the MAAS info the script goes into "Bootstrapping Juju...". The compute node boots, goes through a bunch of configuring the compute node, the juju process seems to stop advancing with a single ssh connection open from the controller to the compute node "ssh -i /root/.cloud-install/juju ...."
<flakrat> I've gone through the process of releasing the node and running through the whole process again, same results
<flakrat> where would I look to find out what's causing the hang-up
<flakrat> I think I found the issue. On the compute node in /var/log/cloud-init-output.log indicates that DNS isn't working on the node, can't reach streams.canonical.com
<arosales_> tvansteenburgh: does this look like an infrastructure failure to you http://reports.vapour.ws/charm-tests/charm-bundle-test-5021-results/charm/charm-testing-aws/2
<arosales_> "Resource Warning"
<arosales> tvansteenburgh: but it consistently happens across all our clouds
<tvansteenburgh> arosales: that can be ignored for now, it's a warning from a python lib. - no correlation with actual failures
<tvansteenburgh> the actual failure in that case is that the deployment didn't stand up in time
<arosales> ok, seems like the charm is valid and isn't a candidate for removal from the store
<arosales> tvansteenburgh: thanks for taking a look
<arosales> mbruzek: https://bugs.launchpad.net/charms/+source/nagios/+bug/1403574
<mup> Bug #1403574: The nagios charm fails automated testing <audit> <auto-test> <nagios (Juju Charms Collection):Triaged> <https://launchpad.net/bugs/1403574>
<arosales> sounds like we need to fix this in the test
<arosales> tvansteenburgh: while I am bothering you, do you know if this charm is an issue with the make testing discussion http://reports.vapour.ws/charm-tests/charm-bundle-test-838-results/charm/charm-testing-hp/2
<tvansteenburgh> arosales: the test is trying to use an env var that's not set. it's up to the test to ensure that it is. so, it's a problem with the test
<arosales> tvansteenburgh: ack, and thanks for looking
<tvansteenburgh> arosales: certainly
<thumper> unit-wordpress-0[5078]: 2015-01-29 22:00:47 INFO unit.wordpress/0.install logger.go:40 W: Failed to fetch http://ppa.launchpad.net/charmers/charm-helpers/ubuntu/dists/trusty/main/binary-amd64/Packages  403  Forbidden
<thumper> lazyPower, marcoceppi: ping
<thumper> trying to install the wordpress trusty charm
<thumper> and getting install hook failed
<thumper> log says ppa forbidden
<thumper> say what?
<thumper> lazyPower: you really there?
<lazyPower> thumper: marcoceppi is travelling and i managed to part when i was trying to respond
<thumper> kk
<lazyPower> can you re-state the issue for me?
<thumper> wordpress trusty charm install hook fails with apt ppa error
<thumper> unit-wordpress-0[5078]: 2015-01-29 22:00:47 INFO unit.wordpress/0.install logger.go:40 W: Failed to fetch http://ppa.launchpad.net/charmers/charm-helpers/ubuntu/dists/trusty/main/binary-amd64/Packages  403  Forbidden
<thumper> I just want a couple of charms to use in a demo
<thumper> and went for the old standby mysql/wordpress
<thumper> happy to use something else
<lazyPower> thumper: how impressive do you want it to look?
<thumper> doesn't have to be too impressive
<lazyPower> and re: mysql - ty for pointing that out, def worth a bug. seems like a PPA archive went missing on us
<thumper> just showing charms actually working
<lazyPower> http://fosdem.juju.solutions
<thumper> lazyPower: what I'm trying to demo is MESS
<thumper> lazyPower: so two different environments inside one state server
<thumper> each serving different things
<lazyPower> ah ok, so simple it is
<lazyPower> try the Elasticsearch/kibana bundle w/ a mediawiki/mysql bundle
<lazyPower> minimal on machines, and i've deployed both within the last 48 hours with success
<lazyPower> http://imgur.com/CYPpORc
<lazyPower> is what i was about to set you up with
<thumper> lazyPower: that would be way more than I need :)
<thumper> lazyPower: forgive me for being dumb, but how do I deploy the bundles?
<lazyPower> https://jujucharms.com/mediawiki-single/7
<thumper> lazyPower: not sure that gui is going to work
<lazyPower> juju quickstart should
<lazyPower> right?
<thumper> um...
<thumper> I'm not sure
 * lazyPower rolls the dice
<lazyPower> lets find out!
<thumper> hmm... I don't have quickstart
<lazyPower> apt-get install juju-quickstart
<lazyPower> thumper: quickstart is just fetching the environment state server from the jenv, as i see it quickstart should give you some results. if quickstart doesn't, go to the next common denominator and try juju-deployer (apt-get install juju-deployer) and give that a run (juju-deployer -c bundles.yaml, or pass it the store url and it *should just work*)
 * thumper tries
<thumper> $ juju switch another
<thumper> $ juju quickstart bundle:mediawiki/single
<thumper> well... it is doing something
 * lazyPower makes overly dramatic music while thumper waits to intensify the feeling of success 
<thumper> retrieving the environment status
<thumper> that is where it is stuck
<thumper> rick_h_: you around?
<lazyPower> thumper: is the env status coming back with different/new keys now?
<thumper> wat?
<lazyPower> afaics that's just the status return. as in the json dump of what's running
<thumper> I can do a status in another shell
<rick_h_> thumper: yep
<lazyPower> or what the state server knows about - to be more accurate
<thumper> and it return fine
<rick_h_> thumper: packing time
<thumper> rick_h_: got a few minutes to talk api connections and quickstart?
<thumper> lazyPower: quickstart is just hanging
<rick_h_> thumper: quick ones sure
<thumper> rick_h_: kk
<thumper> rick_h_: https://plus.google.com/hangouts/_/gtonfe6y7vs6is7p6npawkge3ea?hl=en
#juju 2015-01-30
<bloodice> I have different physical servers i want to use for specific packages.  Is there a way to link the MAAS name and setup to a specific machine in Juju?  This would allow me to allocate the charms to the appropriate servers.
<blahdeblah> bloodice: We use MAAS tags for that
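blahdeblah's tag approach in command form, with a stub juju so the line just echoes (the tag name db-class is made up): tag the physical machines in MAAS, then constrain the service to that tag and juju will only pick matching nodes:

```shell
juju() { echo "would run: juju $*"; }   # stub, echoes only

# nodes tagged "db-class" in MAAS are the only candidates for this service
juju deploy mysql --constraints "tags=db-class"
```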
<thumper> lazyPower: damn... elasticsearch is failing the install hook now
<thumper> apt.cache.FetchFailedException: W:Failed to fetch http://packages.elasticsearch.org/elasticsearch/1.2/debian/dists/stable/main/binary-amd64/Packages  403  Forbidden
<thumper> lazyPower: I hope it is intermittent and temporary
<jose> thumper: that looks like an error server-side, probably moved files or changed to https
<jose> not moved files, but changed paths
<lazyPower> https://askubuntu.com/questions/576500/juju-bundle-deploy-in-one-machine-in-lxc/579585#579585
<seal> lazyPower: o/
<lazyPower> o/ seal
<seal> is there a particular order in which to execute the flannel-docker?
<seal> I was unable to communicate with the other host
<seal> will post the order which I ran it in a sec
<lazyPower> seal: the order dependencies were resolved in the latest iteration of the charm i do believe.
<seal> http://paste.ubuntu.com/9952491/
 * lazyPower looks
<lazyPower> hmm, that looks correct to me - can you get me the output of `juju run --service docker 'ifconfig -a'` ?
<lazyPower> sorry, docker-api, in your given example.
<lazyPower> hmm, that looks correct to me - can you get me the output of `juju run --service api-docker 'ifconfig -a'` ?
<seal> sorry this the correct code http://paste.ubuntu.com/9952527/
<lazyPower> so, you've got a single docker host, and you've made the connections to ETCD and flannel - which should have setup the docker0 bridge. This is why i want to take a peek at ifconfig -a to see whats going on with the docker0 bridge adapter.
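A check like the one lazyPower is asking for can be scripted once you have the interface listing back from `juju run`. This is a minimal sketch that parses `ip -o -4 addr show` output (easier to grep than ifconfig's multi-line format) to confirm the docker0 bridge got an address from flannel; the service name and the sample output are synthetic, not taken from seal's environment.

```python
import re

def find_bridge_addr(ip_output, ifname="docker0"):
    """Return the IPv4 address assigned to ifname in `ip -o -4 addr show`
    output, or None if the interface is absent or has no address."""
    for line in ip_output.splitlines():
        # one-line-per-address format: "4: docker0    inet 172.17.42.1/16 ..."
        m = re.match(r"\d+:\s+(\S+)\s+inet\s+([\d.]+)/\d+", line)
        if m and m.group(1) == ifname:
            return m.group(2)
    return None

# synthetic sample of what `juju run --service docker-api 'ip -o -4 addr show'`
# might return on one unit
sample = """1: lo    inet 127.0.0.1/8 scope host lo
2: eth0    inet 10.0.3.17/24 brd 10.0.3.255 scope global eth0
4: docker0    inet 172.17.42.1/16 scope global docker0"""

print(find_bridge_addr(sample))
```

If this returns None, the docker0 bridge never came up, which would point at the flannel/etcd relation rather than docker itself.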
<seal> ok one sec will set it up quickly and ping you if you are still around
<seal> destroyed the last environment :)
<lazyPower> ok, np seal
<rbasak> sinzui: src/bitbucket.org/kardianos/osext/LICENSE is wrong. It's been fixed upstream though.
<sinzui> :/
<rbasak> sinzui: can you arrange to pull the latest in please, assuming that doesn't break any dependants?
<sinzui> I will
<rbasak> sinzui: it might be worth instituting a licence review policy before permitting changes to dependencies.tsv.
<rbasak> Thanks
<sinzui> rbasak, I think it will be after the fact, engineers change deps all the time. I think we need a report of what changed for review every week
<rbasak> sinzui: OK
<rbasak> sinzui: this isn't a blocker. Logically I think it's acceptable as it's fixed upstream, and I've written a note accordingly.
<sinzui> rbasak, I reported bug 1416425 and am handing it off to developers
<mup> Bug #1416425: src/bitbucket.org/kardianos/osext/LICENSE is wrong <licensing> <packaging> <juju-core:Triaged> <juju-core 1.21:Triaged> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1416425>
<rbasak> sinzui: thanks. Next...
<rbasak> sinzui: why has juju/cmd changed license?
<rbasak> sinzui: I don't mind particularly, but I do have to update the copyright file to follow all these changes, and it's not optimal for me to discover these all in the diffs.
<rbasak> It's driving me up the wall a little right now.
<sinzui> rbasak, I don't know about the change, sorry. I will ask
<rbasak> sinzui: it's not so much that it changed, but more that it feels like these changes are almost slipping past me. We should sort out a process so that I can follow licensing issues and changes more easily.
<sinzui> rbasak, Noted. I will bring this up as an issue in the release meeting in 2 hours. We will need a policy to catch and review licensing changes before we *consider* a release
<rbasak> Thanks.
<rbasak> sinzui: github.com/juju/syslog has an incorrect LICENSE file.
<rbasak> Google's third clause has been removed.
<sinzui> :(
<rbasak> Not fixed upstream. Can I leave this one with you? I think if it's fixed upstream it's OK to take it.
<rbasak> (without updating it, for now)
<sinzui> rbasak, I am confused https://github.com/juju/syslog/blob/master/LICENSE shows me a bsd 2-clause license. How does this relate to google?
<rbasak> sinzui: right, but for example https://github.com/juju/syslog/blob/master/syslog.go refers to the Go LICENSE which hasn't been shipped.
<sinzui> ah
<sinzui> thank you rbasak
<rbasak> sinzui: it seems that the author derived from Go, but just included some arbitrary LICENSE file instead of the Go LICENSE file.
<rbasak> No problem.
<rbasak> I'll keep slogging through.
<rbasak> sinzui: next question. https://github.com/juju/testing/issues/32 hasn't been addressed. Is this reported to the right place?
<sinzui> rbasak, possibly not. I report bugs against https://bugs.launchpad.net/juju-core when I want core engineers to fix the issue. I don't think the team reads github issues unless pressed
<skay> we are sedding some juju status output to get the instance id, and I was wondering if there is a nicer way to do that? juju status <service> <field> doesn't exist, but wouldn't it be nice?
<rbasak> OK, I'll copy it over.
<rbasak> Copied to https://bugs.launchpad.net/juju-core/+bug/1416430, thanks.
<mup> Bug #1416430: Some files refer to an include license file that is not included <licensing> <juju-core:New> <https://launchpad.net/bugs/1416430>
<rbasak> sinzui: next issue. juju/utils introduced a bunch of files claiming AGPLv3, referring to a LICENSE file, when all other files are LGPLv3 and so is the LICENSE file.
<rbasak> Should these be LGPLv3?
<sinzui> I don't know, but I agree that LGPLv3 is right.
<sinzui> rbasak, I suspect someone moved code and didn't check the license when they moved it
<rbasak> Yeah that sounds likely.
<rbasak> I'm going to stop now. I'll carry on later. Probably next week.
 * skay discovers --json and is much happier
 * skay means --format not --json
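skay's sed pipeline can be replaced entirely once the output is structured: `juju status --format=json` gives a document you can index directly. A minimal sketch, assuming the juju 1.x status layout (machines keyed by machine id with an `instance-id` field, services/units pointing at a machine); the sample document here is trimmed and synthetic.

```python
import json

def instance_id(status_json, unit_name):
    """Look up the provider instance-id for a unit from
    `juju status --format=json` output (juju 1.x layout assumed)."""
    status = json.loads(status_json)
    service = unit_name.split("/")[0]
    # a unit record names the machine it landed on; machines carry instance-id
    machine = status["services"][service]["units"][unit_name]["machine"]
    return status["machines"][machine]["instance-id"]

# trimmed, synthetic status document
sample = json.dumps({
    "machines": {"1": {"instance-id": "i-0abc123"}},
    "services": {"mysql": {"units": {"mysql/0": {"machine": "1"}}}},
})

print(instance_id(sample, "mysql/0"))  # i-0abc123
```

From a shell, the same lookup is a one-liner with jq against `juju status --format=json`, which avoids sed's fragility when the human-readable layout changes.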
<sinzui> rbasak, I will report a bug about the utils issue. For each licensing bug, I have targeted 1.23, 1.22, and 1.21, and we can see people are working on them https://bugs.launchpad.net/juju-core/+milestone/1.21.2
<jamespage> rick_h_, hey - is there a way to use juju-gui in a 'do what I tell you' mode
<jamespage> updating bundles that use lxc containers etc... is a pita right now as I appear to have to use a MAAS environment to do the updates
<rbasak> sinzui: thanks
<rick_h_> jamespage: not following. You mean without the commit stage?
<jamespage> rick_h_, I really just want a demo environment where I can tweak things
<jamespage> rick_h_, but the demo online appears to ignore machine placements
<rick_h_> jamespage: you mean have it work like the demo url?
<jamespage> rick_h_, yup
<rick_h_> jamespage: oh, hmmm, how are you defining machines? it should support what it exports.
<rick_h_> jamespage: so to put it in the same mode there's a sandbox setting on the gui to set
<rick_h_> jamespage: looking for exact config attr
<rick_h_> jamespage: juju set juju-gui "sandbox=true"
<jamespage> rick_h_, this bundle - https://jujucharms.com/u/openstack-charmers/openstack/
<rick_h_> jamespage: so few things. One, that sandbox setting should get you into a 'demo mode' and the bundle.yaml you dump out from the gui should reimport with drag/drop.
<rick_h_> jamespage: looking your bundle doesn't show in the old api so not sure what's up. looking into it
<aisrael> Is there a safe way to compact juju's mongodb database?
<rick_h_> jamespage: loading up for flight but if you need anything hit up hatch or frankban and they'll help out.
<jamespage> hatch, frankban: when I import/export that bundle through a gui in sandbox mode, I lose the machine placement - any ideas?
<hatch> jamespage: sorry that's not yet implemented
<jamespage> hatch, its not implemented in the sandbox mode?
<frankban> jamespage: the GUI does not support machine placement in bundles yet
<jamespage> frankban, really?
<hatch> frankban:  I thought that was only in sandbox mode
<jamespage> frankban, but I created this bundle on a MAAS deployment using juju-gui ?
<frankban> hatch: oh, so we have machine placement (as defined by the new bundle spec) in real env in the GUI?
<hatch> hmm - I'm not actually confident on that now :)
<hatch> let me look at the source
<frankban> hatch: thank you
<hatch> frankban: looks like it just passes the bundle file through so as long as the deployer supports it...
<hatch> jamespage: is it not working for you in a real env or just in sandbox mode?
<jamespage> hatch, it worked ok on a MAAS environment (import/export)
<jamespage> hatch, but that's a pita for doing a small change to a bundle
<hatch> that's a good point - you are able to modify the file manually though
<jamespage> hatch, but not for sandbox - as soon as I import the bundle it loses its placement data
<jamespage> hatch, I was able to modify the file manually
<hatch> yeah sandbox is definitely not supported for those machines
<hatch> er machine placement
<hazmat> hatch, gui doesn't export placement data?
<hatch> hazmat: it doesn't support placement data on import in sandbox mode. I'm just tracing the export code now (it's been a bit since I've been in there)
<hatch> hazmat: if I'm reading the code correctly it should be exporting machine placement data as well
<hatch> jamespage: so you would like to import a bundle file into sandbox so that you can modify it without hacking on the yaml file directly?
<jamespage> hatch, +1 yes please
<hatch> jamespage: would you be able to create a bug to that effect?
<jamespage> hatch, I can do that
<hatch> jamespage: thanks!
<hazmat> hatch, it's annotating x/y gui placement, but not machine placement?
<hatch> I don't understand the question
<hazmat> hatch, it's not exporting unit-to-machine placement, just service x/y on canvas
<hatch> hazmat: ok I'll have to look into that because according to the export code it should be exporting the machine placement details
<hazmat> hatch, not sure how, unless it's storing placement as an annotation, that info is lost and would have to be inferred. anyways, not seeing that on demo.jujucharms.com
<rick_h_> hazmat: jamespage we decided not to implement deployer logic in JS for importing with placement hoping we were going to get core support and skip that.
<hatch> hazmat: https://github.com/juju/juju-gui/blob/develop/app/models/models.js#L2320
<rick_h_> hazmat: jamespage but as priorities go that's a decision from last sprint we should look at again as it's something we should give users
<jamespage> export is ok
<rick_h_> jamespage: correct, and atm there's a bug in export hatch is looking at. I think we missed an update as we've upgraded to the new charmstore api and that should be a quick fix hopefully.
<hazmat> hatch, interesting, might be a bug, that looks like its trying to infer it, but its not triggering on demo site in sandbox
<hatch> hazmat: ok I'll be sure to check that out - I'm now fixing an export bug which was just discovered
<bloodice> I have different physical servers i want to use for specific packages.  Is there a way to link the MAAS name and setup to a specific machine in Juju?  This would allow me to allocate the charms to the appropriate servers.
<stokachu> bloodice: --to option
<bloodice> That lets me select the machine, but there doesn't seem to be a way for me to tell juju, when I create a machine, which server to take from MAAS
<stokachu> bloodice: when you juju bootstrap?
<bloodice> well, when i bootstrapped, it just grabbed a server
<stokachu> bloodice: yep do juju bootstrap --to <maas-hostname>
<bloodice> but after the gui is up and running, there doesn't seem to be a way to say "i want this specific server in MAAS"
<stokachu> not sure about the gui, but cli you use --to <hostname>
<bloodice> so after Juju is online, i can go into the console and create a machine with the --to command using the maas hostname?
<stokachu> bloodice: yep
<bloodice> hrm
<bloodice> i ran this command "juju deploy --to c7yra.maas mysql" and i get: invalid --to parameter "c7yra.maas"
<bloodice> i tried "juju add-machine --to c7yra.maas" and got: error: flag provided but not defined: --to
<bloodice> ack, ok, i got it to work, but i really dont like it...  i needed to use the tag feature, so "juju add-machine --constraints tags=c7yra.mass"
<bloodice> it just adds a machine with a number, but i have no way of seeing that 8 is server c7yra.mass in the gui.  Status does show the server name though.
<bloodice> welp, that doesnt work anyhow... status says:  ({"tags": ["No such tag(s): ''c7yra.maas''."]})'
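The error above suggests a hostname was passed where a MAAS tag name is expected: the `tags=` constraint matches tags you have first created and assigned to nodes in MAAS (as blahdeblah suggested earlier), not MAAS hostnames. A hypothetical bundle fragment for that pattern, where `db-server` is an example tag name, not one from this transcript:

```yaml
# pin mysql to whichever MAAS node carries the "db-server" tag;
# the tag must already exist in MAAS and be assigned to the node
machines:
  "0":
    constraints: tags=db-server
services:
  mysql:
    charm: cs:trusty/mysql
    num_units: 1
    to: ["0"]
```

The same constraint works on the command line as `juju deploy --constraints "tags=db-server" mysql`, which sidesteps the hostname-vs-tag confusion entirely.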
<heartones> can anyone help with Juju installation? I am getting: not able to connect: Permission denied (publickey,password).
<sebas5384> just passing to drop something I did the other night for Juju + Vagrant https://github.com/sebas5384/vju
#juju 2015-01-31
<eagles0513875_> hey guys is anyone in here?
<eagles0513875_> hey guys i need some help. i'm trying to provision charms through the manual provider and for some reason I am not able to, as i get errors when i do juju bootstrap
<eagles0513875_> im doing this on a windows machine
<heartones> hello charmers, anyone feel like troubleshooting a new juju installation with me
<heartones> Attempting to connect to 38wpm.maas:22
<heartones> Attempting to connect to 38wpm.maas:22
<heartones> Attempting to connect to 192.168.178.103:22
<heartones> permission denied public key
<eagles0513875_> i need some help as well with manual provider anyone around that could provide some insight into some issues im having
<eagles0513875_> hey dimitern
<dimitern> eagles0513875_, hey
<heartones> sure can you ask me since no one is around
<heartones> can you tell me what's your setup
<heartones> I have two hp proliant servers connected to an unmanaged switch that is integrated into my cable router, and I'm playing with maas/juju/openstack
<heartones> I am going to do manual provisioning now, I'll let you know if I get past it
<eagles0513875_> heartones: what version of juju do you have
<eagles0513875_> im using juju on windows with 1.20.13
<eagles0513875_> and it seems like there is a bug
<eagles0513875_> ERROR initializing ubuntu user: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain ERROR there was an issue examining the environment: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
<eagles0513875_> and the server isn't using ssh keys as it's a clean installation of ubuntu
<eagles0513875_> all i did was setup a user for me and disabled root
<eagles0513875_> user has root access through sudo and is in the sudoers file
<eagles0513875_> heartones: are you on linux?
<heartones> sorry I was away
<heartones> can you tell me what is your goal first are you trying to install what combination of juju/maas/openstack
<heartones> and I need to know your network topology as well how many NIC cards are connected to what switch and router this will make life easy to know this in advance
<heartones> cause juju and maas are a little bit picky :)
<heartones> I can also log in remotely if needed to help you with things using putty ssh tunneling
<heartones> and don't forget the parsing of python is really picky too :) in your yaml file
<heartones> I can help with openstack and maas, but still playing with juju to get it to bootstrap my other node
<lazyPower> eagles0513875_: what command did you use to attempt to bootstrap the node? just juju bootstrap with your manual provider configured in environments.yaml?
<lazyPower> eagles0513875_: as i understand it (i haven't run the windows juju agent in quite some time - i'll have to circle back with a vm image at some point in the near future and re-familiarize myself) - even if it doesn't support key based auth, it should still prompt you for the password if password login is still enabled
<lazyPower> eagles0513875_: also, if you ping me back and i'm unresponsive, i'm attending FOSDEM - so i might be a bit slow to reply, apologies in advance
<heartones> 'juju bootstrap --debug
<heartones> no problem, take your time. I am working here on my servers
<eagles0513875> hey all
<bloodice> why did they remove the maas-name constraint???????
#juju 2015-02-01
<eagles0513875> lazyPower_: it seems like it's a bug in the 1.20.13 version for windows. installed the 1.21.1 version on one of my macs and it works like a charm - i'm able to bootstrap
<Guest90353> hi i have been testing openstack using juju and maas
<Guest90353> i am successfully able to deploy openstack without the quantum-gateway charm
<Guest90353> however when i add the quantum-gateway charm the networking service doesn't show up in openstack
<Guest90353> even after adding the charm relations
<Guest90353> plz help
<Guest90353> the dashboard doesn't have the network tab, i did try rebooting the whole setup
<eagles0513875_> hey
<eagles0513875_> i have a question. i deployed the juju gui manually to a remote server I have and for some reason when i issue juju status it's still showing pending. also, how can i access the gui?
<eagles0513875_> is anyone around in here?
<Guest90353> eagles0513875: checkout the juju debug-log command
<eagles0513875_> hey guys
<marcoceppi> eagles0513875_: are you still having issues?
<eagles0513875_> marcoceppi: yes
<eagles0513875_> and im not sure where the issue is
<eagles0513875_> marcoceppi: i deployed juju gui and its still showing its deployment as pending
<eagles0513875_> marcoceppi: i deployed this using manual provisioning and juju on mac osx
<eagles0513875_> marcoceppi: any ideas?
#juju 2016-02-01
<yuanyou> Hi, all: juju nodes are always "Waiting for agent initialization to finish", why is that? http://paste.ubuntu.com/14848082/
<yuanyou>  WARNING juju.apiserver.client status.go:677 error fetching public address: public no address
<yuanyou> Hi all: when deploy openstack with juju , some nodes can't fetch public address?
<bdx> lazy
<bdx> lazyPower: you getting this?
<bdx> ^vault stuff
<deanman> Anyone using the juju vagrant images provided by ubuntu? I'm having trouble accessing the juju-gui as mentioned in the documentations
<BrunoR> deanman: what kind of trouble?
<deanman> BrunoR: Once the VM is up, there isn't any service providing the web console for Juju as described in docs
<pmatulis> deanman: kindly file a juju docs bug if you feel there is something wrong with the docs
<pmatulis> https://github.com/juju/docs/issues/new
<deanman> There is something wrong with the image altogether, not the docs. The latest wily 64 image will simply not run without errors from start to end.
<deanman> pmatulis: Where would be the best place to report a bug regarding juju vagrant images?
<pmatulis> deanman: i would start here. someone will push it along if it's the wrong place:
<pmatulis> https://bugs.launchpad.net/juju-core/+filebug
<deanman> if its launchpad then i have found this https://bugs.launchpad.net/juju-vagrant-images but not sure if its updated
<pmatulis> deanman: looks good to me
<Shruthima> Hi All, I am Shruthima, working on the IBM Installation Manager product, which is useful for installing many IBM products. I will be developing these charms from layers and I explored making use of the basic layer which is present in http://interfaces.juju.solutions but am not sure about it! Could you please guide me on which layer and interface I can use for charming the IBM-IM product?
<nagyz> is there a way to add a charm to reference an already deployed, non-maas service?
<nagyz> let's say I already have ceph up and running, but I want my cinder to be configured for it
<pmatulis> Shruthima: jcastro is your guy but he's not here presently
<Shruthima> ok when he will be available?
<pmatulis> Shruthima: don't know. but he's usually always here
<Shruthima> thanks pmatulis, will wait for his reply
<BrunoR> pmatulis: i can confirm the issues reported by deanman. current wily vagrant image produces a 'command not found' in /tmp/jujuredir/setup-juju.sh, writing ticket
<BrunoR> pmatulis: https://bugs.launchpad.net/juju-vagrant-images/+bug/1540471
<mup> Bug #1540471: Juju vagrant box failed to start (wily) <Juju Vagrant images:New> <https://launchpad.net/bugs/1540471>
#juju 2016-02-02
<cloudgur_> question .. what is the best way change the default charm logo when using the layer-docker model ?
<cloudgur_> I've updated the config.yaml for logo as normal but the dependencies keep pulling in the docker logo as icon.svg in the charm root path after build
<apuimedo> jamespage: thanks for the neutron-api merge
<jamespage> apuimedo, np
<jamespage> I also provided some feedback on midonet-api on the tracking bug
<jamespage> https://bugs.launchpad.net/charms/+bug/1453678
<mup> Bug #1453678: New charms: midonet-host-agent, midonet-agnet, midonet-api <Juju Charms Collection:In Progress by james-page> <https://launchpad.net/bugs/1453678>
<jamespage> I can deploy OK, but lint, unit and amulet tests are throwing errors atm
<apuimedo> thanks, I'll check that
<apuimedo> jamespage: any eta on the nova-cloud-controller review?
<BrunoR> how is juju-storage expressed in a bundle?
<deanman> Where are the vagrant juju images build? Where can i have a look at the source code of the vagrant box ?
<BrunoR> deanman: https://launchpad.net/vmbuilder https://code.launchpad.net/~ubuntu-on-ec2
<firl> lazypower|summit: any idea on timing that you might want to sync up?
<BrunoR> anyone? how is juju-storage expressed in a bundle?
<deanman> BrunoR: Can't seem to be able to find any vagrant reference in that URL.
<BrunoR> deanman: http://bazaar.launchpad.net/~utlemming/livecd-rootfs/master/view/head:/live-build/ubuntu-cpc/hooks/042-vagrant.binary
<BrunoR> deanman: and yes, this sayes nothing about Juju. That Vagrantfile must be somewhere else.
<nagyz> jamespage, is there a way to integrate an existing ceph cluster with a new openstack deployment using juju? :-)
<nagyz> being the ceph charm author you might know.
#juju 2016-02-03
<yuanyou> jamespage: Hi, I want to get a config option value from another charm. I use config('ext-port') but can't get the value from the charm neutron-gateway. How can I get this value "ext-port" from neutron-gateway? Thanks
<jamespage> yuanyou, hey - config is always scoped to a specific charm - it's possible to distribute that across relations between charms, but you'd have to do that explicitly - what's your use case?
<nagyz> jamespage, did you see my question from yesterday? I might have missed your reply as my client got disconnected apparently for a bit
<deanman> Hello trying to run juju inside an ubuntu wily64 VM behind proxy
<deanman> I've configured the proxy inside environments.yaml and when trying to use the local provider to deploy a simple redis service i get an error "DEBUG httpbakery client.go:226 } -> error <nil>". Any hints?
<jamespage> nagyz, hey - my irc was on and off whilst travelling - ask me again :-)
<apuimedo|away> jamespage: are you back from the travel?
<jamespage> apuimedo|away, I am yes
<apuimedo|away> :-)
<apuimedo|away> jamespage: https://code.launchpad.net/~celebdor/charms/trusty/nova-cloud-controller/liberty/+merge/283709 reminder :P
<jamespage> apuimedo|away, looking
<jamespage> apuimedo|away, did you see my comments in midonet-api on the review bug?
<apuimedo|away> not yet, I'll check it after I finish the current meeting
<jamespage> apuimedo|away, oh - right - that merge includes shared-secret...
<nagyz> jamespage, I'd like to use juju to deploy openstack but I already have a ceph cluster up and running which was not done using juju - is there a way to deploy the ceph charms and point them to the existing installation?
<jamespage> nagyz, no - sorry - that's not possible
<nagyz> I guess it's also not possible to deploy a new availability zone but instead of deploying keystone and horizon point them to an already existing one?
<jamespage> its a popular request and its probably not that much work to figure out a proxy charm that implements the client interface of the ceph charm
<jamespage> but its not been written by anyone yet...
<jamespage> nagyz, az in which context? nova?
<nagyz> yeah
<jamespage> it's possible to do multiple regions with the charms with a single keystone and horizon
<jamespage> az is adjunct to that concept - it's just a grouping of servers within a region...
<nagyz> what I meant is that we already have openstack up and running and we want to add a new az that I wanted to deploy using juju
<jamespage> nagyz, do you mean region or az?
<nagyz> ah, sorry, right, I meant region.
<nagyz> the only shared components between regions are keystone and horizon, right?
<nagyz> so is it possible to deploy everything except keystone and horizon with juju, and for those just point them to the existing installation?
<jamespage> nagyz, ok - so all of the charms have a 'region' setting; you have to deploy new instances of all charms with the region set to the new region name - horizon just relates to keystone so that should just dtrt; keystone you can specify multiple regions
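jamespage's recipe above (new instances of each charm, same region value everywhere) might look roughly like this as deploy-time config; the service aliases and the region name `RegionTwo` are hypothetical examples, not from this conversation:

```yaml
# config.yaml fragment: every charm instance for the second region
# gets the same "region" setting
glance-r2:
  region: RegionTwo
nova-cloud-controller-r2:
  region: RegionTwo
cinder-r2:
  region: RegionTwo
```

Each service would then be deployed under its alias, e.g. `juju deploy --config config.yaml cs:trusty/glance glance-r2`, and related to the single shared keystone, which registers endpoints per region.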
<jamespage> nagyz, oh right - so the proxy question again in a different context - no that's not possible
<nagyz> right, but when I add a keystone relation so the different charms can register in the endpoints, they need to proxy
<nagyz> ah, right.
<nagyz> ok, so juju is only for greenfield deployments, I see :)
<jamespage> nagyz, yes
<jamespage> sorry
<jamespage> not a retro fit...
<nagyz> so I'd need to figure out how to migrate the current data over...
<nagyz> which is not going to happen on the ceph side (100TB+)
<nagyz> so there goes juju for me I assume :( :)
<jamespage> nagyz, yah - that's quite a bit of data...
<jamespage> nagyz, lemme check on the proxy charm thing
<nagyz> the new one we're about to deploy is ~2PB which I expect the pesky users to fill up quickly :-)
<jamespage> icey, cholcombe: ^^ I know we discussed proxying an existing ceph deployment into a juju deployed openstack cloud - have either of you done any work in that area?
<nagyz> or I guess we could write the proxy charm ourselves.
<jamespage> nagyz, that's def possible
<icey> jamespage nagyz: I haven't done any work on that yet
<jamespage> nagyz, we'd love to support you in that effort if you decide to go that route...
<nagyz> seems like every project I touch I end up writing code for - same happened to OpenStack itself.
<jamespage> nagyz, welcome to open source!
<nagyz> hah
<nagyz> with the current ceph charms is it possible to deploy the mons separately?
<jamespage> nagyz, yes
<jamespage> infact we have a bit of a redesign in flight for that
<nagyz> I know there is ceph-osd which only adds osds to existing clusters but looked to me like the ceph charm installs both the mon and the osd code
<jamespage> nagyz, the ceph charm is a superset of ceph-osd - but you can run it with no osd-devices configuration, so it just does the mon
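The mon-only pattern jamespage describes can be sketched as a bundle fragment; charm revisions and device paths are illustrative assumptions, and the key point is simply that the ceph units carry no osd-devices setting:

```yaml
# ceph units act as monitors only because osd-devices is left unset;
# storage capacity is then scaled by adding ceph-osd units
services:
  ceph:
    charm: cs:trusty/ceph
    num_units: 3
    options:
      monitor-count: 3
  ceph-osd:
    charm: cs:trusty/ceph-osd
    num_units: 3
    options:
      osd-devices: /dev/sdb
```

With a relation added between the two services (`juju add-relation ceph ceph-osd`), growing the cluster is then just `juju add-unit ceph-osd`.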
<nagyz> and sets it up
<nagyz> ah, got it
<nagyz> and then just keep adding ceph-osds
<jamespage> nagyz, icey has been working on a new ceph-mon charm, chopping out the osd support from ceph and simplifying the charm
<nagyz> are you aware that if I deploy using 'osd-devices: a' and then change it to 'osd-devices: a b' then it doesn't work? :-)
<nagyz> it wants to re-run ceph-deploy on a, which fails
<jamespage> nagyz, osd-devices is quite blunt
<nagyz> so I cannot add drives inside an osd once deployed?
<jamespage> nagyz, oh wait - that should not happen - the charm should detect that a is in use and just skip it
<jamespage> nagyz, please file a bug for that
<jamespage> def a regression if that is the case...
<nagyz> ok, in my -limited- testing, this didn't work. I'll retest and see
 * jamespage wonders if icey broke my code...
<nagyz> for quick setup and teardown I'm using juju now to test stuff
<icey> jamespage: I don't think so...
<nagyz> but of course juju itself has problems with my bonded maas 1.9 network
<jamespage> icey, just kidding ;-)
<nagyz> 50% of deployments fail
<icey> yeah yeah
<nagyz> thanks jamespage icey for the info
<jamespage> nagyz, oh that sounds nasty - again if you're hitting specific issues please raise bugs on juju as well - you sound like you're right on the edge of feature support and we're working towards 16.04 in three months...
<jamespage> so any feedback on latest juju + maas 1.9 with network bonding vlans etc... is super useful right now
<nagyz> yeah we're going to stick to 14.04 for the next 6+ months after 16.04 is released tho :P
<nagyz> right I've been opening maas bugs left and right
<jamespage> nagyz, ack - you and a lot of people
<jamespage> nagyz, i think most users take the new lts at release into testing and then go to production in that type of timescale...
<nagyz> agreed - we'll do the same
<jamespage> testtesttesttesttest
 * jamespage apologizes for the mantra...
<jamespage> nagyz, if you have other question either ask here or on the juju ML
<nagyz> will do
<jamespage> I try to watch both ;-)
<jamespage> apuimedo|away, hey - so I have most of midonet deployed apart from the host-agents bit
<jamespage> apuimedo|away, I was surprised that my instance got a dhcp IP address even without that - does midolman do something clever on the compute node?
<apuimedo|away> jamespage: you mean apart of the neutron-agents-midonet?
<jamespage> apuimedo|away, yeah
<apuimedo> jamespage: MidoNet dhcp driver only does a noop and sets a namespace for the metadata driver
<apuimedo> jamespage: MidoNet serves dhcp itself :P
<jamespage> apuimedo, ah!
<apuimedo> it's much comfier
<jamespage> so that comes from midonet-api or midolman?
<nagyz> one more question on the network side now that you guys are talking about it: let's say I have eth0 deployed on 10.1.0.0/16 and eth1 deployed on 10.2.0.0/16 - is it possible to tell juju to use one subnet for exposing networks and the other just for spinning up containers for example?
<apuimedo> and with the new release, we also provide metadata from the agent
<apuimedo> midolman
<nagyz> is there a juju network doc that I could read about the maas integration?
<jamespage> nagyz, not quite...
<jamespage> its coming with juju 2.0
<apuimedo> but juju will still take a while to catch up with the new MidoNet release
<jamespage> 'network spaces'
<jamespage> lemme dig you out a reference
<jamespage> apuimedo, hmm ok
<nagyz> jamespage, maas introduced fabric and space but it's very confusing even for someone with good network experience
<apuimedo> jamespage: ryotagami will be the one adding the v5 support
<nagyz> jamespage, the wording is not very exact in the maas docs
<jamespage> nagyz, I'll provide them that feedback
<jamespage> hmm - none in channel
<jamespage> nagyz, fabric
<jamespage> nagyz, sorry - I know there are some documentation updates in flight - MAAS 1.9 is still fairly new
<jamespage> nagyz, broadly
<nagyz> I guess one is L2 the other is L3 separation
<jamespage> subnet >---- space
<jamespage> so multiple subnets in a space
<jamespage> a space is a collection of subnets (l2 network segments) with equivalent security profile
<apuimedo> jamespage: which public-address will get you then?
<jamespage> so think DMZ, APPS, PCI for spaces
<apuimedo> and which private-address
<jamespage> apuimedo, oh - not got that far yet :-)
<apuimedo> in hookenv
<apuimedo> jamespage: I was thinking hard about that problem before going to bed
<jamespage> sorry, too many conversations
 * jamespage reads backscroll
<apuimedo> jamespage: I was just joining your conversation with nagyz
<jamespage> apuimedo, well private-address almost becomes obsolete
<apuimedo> because it will prolly affect openstack charms deployment
<jamespage> it still exists as really the IP address on the network with the default route
<apuimedo> you'll probably have a management network
<jamespage> apuimedo, oh it will
<apuimedo> and a data network
<jamespage> apuimedo, most charms support that with config
<nagyz> right
<apuimedo> so if I want to future proof
<jamespage> apuimedo, we're working on network space support across the charms in the run up to 16.04
<apuimedo> I need to get from my charms the ip on a specific maas network
<nagyz> so how is it currently done? can I declare the different subnets and have MAAS manage DHCP+DNS on them and juju use it?
<jamespage> when that feature GA's in juju
<jamespage> apuimedo, ok - so there will be some additional hook tooling for this
<apuimedo> nagyz: so does maas already provide dnsmasq for more than a net?
<jamespage> apuimedo, and probably some extra metadata stanzas - that's still in flux
<apuimedo> jamespage: good. I'll be looking forward to see it then
<nagyz> apuimedo, good question actually - it still could be buggy.
<apuimedo> thanks
<nagyz> apuimedo, even with one subnet, maas dns is broken for me
<nagyz> (I promised to open bugs)
<apuimedo> nagyz: in which way?
<jamespage> apuimedo, basically its 'network-get -r <binding> --primary-address'
<jamespage> think binding == management network or data network for tenant traffic
<apuimedo> well, alai did mention yesterday that on some lxc containers she was not getting "search " in /etc/resolv.conf
<nagyz> when doing the initial enlistment the node gets a DHCP IP which is registered to DNS, but then when a new node wants to enlist it gets the same IP which already has a DNS record so it gets back a bad request for the enlistment rest call
<jamespage> but services are bound to spaces, so if units end up on different subnets within a space, they still get the right IP for the local subnet
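The `network-get` tooling jamespage describes was still in flux at this point; a minimal sketch of the intended usage, with the hook tool stubbed out (the binding name and the returned address are invented for illustration), so it runs outside a real Juju hook context:

```shell
# Sketch only: `network-get -r <binding> --primary-address` is the form discussed
# above; it is stubbed here because the real tool only exists inside a hook.
network_get() {
    # stands in for: network-get -r "$1" --primary-address
    echo "10.20.0.7"    # pretend primary address on the bound space's local subnet
}

# A charm endpoint bound to, say, a "data" space would ask which local address to use:
LISTEN_ADDR=$(network_get cluster)
echo "listening on ${LISTEN_ADDR}"
```

Because units in the same space can sit on different subnets, asking per-unit like this (rather than hard-coding a network) is what keeps each unit on the right local subnet.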
<nagyz> which breaks enlistment so breaks maas totally
<nagyz> so the DHCP record <> DNS record sync is quite flaky
<apuimedo> nagyz: that sounds very strange. It would be more of a dnsmasq bug, and those don't come often
<nagyz> why would it be dnsmasq?
<apuimedo> nagyz: I got once the setting screwed up
<jamespage> nagyz, hmm - that sounds like a bug or some sort of configuration issue - I've not seen that problem in our lab
<apuimedo> so I had to destroy the environment and recreate it
<jamespage> apuimedo, its isc-dhcp server in MAAS (not dnsmasq)
<nagyz> I'd LOVE to use maas's built in DNS instead of writing a manual designate syncer script
<nagyz> I've found an isc-dhcp-server bug when using multiple subnets in trusty...
<apuimedo> oh, then I misremember badly
<nagyz> it's pending confirmed, triaged
<nagyz> so I'm down to one subnet.
<apuimedo> I had such a funny one. I did juju ssh foo
<apuimedo> and juju was taking me to the `bar` machine
<nagyz> but even then if I right now flip it from DHCP to DHCP+DNS, it just kills my environment fully - no more commissioning or deploy possible
<nagyz> and I don't want to reenlist the 100+ nodes :)
<nagyz> (vm snapshots ftw)
<jamespage> nagyz, suffice to say lots of improvements in managing DNS across multiple subnets using MAAS - if there are isc-dhcp bugs, we're fortunate that there is lots of expertise both in the MAAS team and across the ubuntu distro to resolve that...
<nagyz> jamespage, right I think the bug only affects the version shipped in trusty
<nagyz> I can dig out the bug if you want. :)
<nagyz> https://bugs.launchpad.net/maas/+bug/1521618
<mup> Bug #1521618: wrong subnet in DHCP answer when multiple networks are present <MAAS:Triaged> <isc-dhcp (Ubuntu):Confirmed> <https://launchpad.net/bugs/1521618>
<nagyz> here it is
<nagyz> basically I can't even do PXE boot as it mixes up the subnet infos
<jamespage> nagyz, please raise bugs and mail the juju and MAAS mailing lists with any probs - feedback is a hugely valuable part of our development process.
<nagyz> a lot of times I open bugs then I get an invalid and blake explains to me that it's not the right way to hold it ;-)
<jamespage> 'affects me too' with a comment always good to help with priorities on existing bugs....
<jamespage> nagyz, hehe - ok - sometimes a post to the ML first is a nice idea
<nagyz> is the ML active for maas?
<nagyz> I saw the dev ML archive that had like 50 mails in a year
<jamespage> nagyz, but I'd always rather have the bug and explain that no
<jamespage> nagyz, meh - just email on the juju one - that has the right eyes on it
<nagyz> :D
<nagyz> lol
<jamespage> no/not
<nagyz> so would doing a keystone proxy charm be considerably harder than the ceph proxy?
<nagyz> (is "proxy charm" an official term?)
<jamespage> nagyz, well proxy charm is in my head really ;-)
<nagyz> found some google reference to it, might have been from you
<nagyz> but I think it makes a lot of sense
<jamespage> nagyz, but that's what its doing - implementing the required interfaces, but just proxying out to a different backend service.
<nagyz> although our corporate overlords need to give me an OK before I could opensource any of the code I write, which takes months...
<jamespage> rather than running them itself...
<nagyz> it can skip all installation phases and just has to implement the "answer" part of a relationship I guess
<nagyz> (haven't looked at juju internals)
<jamespage> nagyz, basically yes
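A "proxy charm" in this sense only has to publish relation data pointing at an existing backend. A purely hypothetical sketch (the config option and relation keys are illustrative, and the Juju hook tools `config-get`/`relation-set` are stubbed so it runs standalone):

```shell
# Hypothetical proxy-charm hook sketch: answer a relation by advertising an
# external service instead of installing anything locally.
config_get()   { echo "192.0.2.10"; }        # stands in for: config-get <key>
relation_set() { echo "relation-set $*"; }   # stands in for: relation-set k=v ...

# e.g. hooks/identity-service-relation-joined for an imagined keystone proxy:
EXT_HOST=$(config_get external-keystone-host)
relation_set auth_host="$EXT_HOST" auth_port=35357 \
             service_host="$EXT_HOST" service_port=5000
```

As nagyz guesses, the install/start phases can be near no-ops; the "answer" side of the relationship is the only part that matters.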
<nagyz> one last question before I leave to grab a bite
<jamespage> nagyz, ok
<nagyz> let's say I deploy ubuntu now on trusty from the liberty cloud archive
<jamespage> ack
<nagyz> once mitaka comes out, how would the upgrade look like using juju?
<jamespage> nagyz, oh right
<jamespage> two options
<jamespage> either - "all units at once"
<jamespage> juju set glance openstack-origin=cloud:trusty-mitaka
<jamespage> or
<jamespage> juju set glance openstack-origin=cloud:trusty-mitaka action-managed-upgrade=True
<jamespage> and then do
<jamespage> juju action do glance/0 openstack-upgrade
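The two upgrade paths above, consolidated into one sketch (juju 1.x CLI as used in the log; the `juju` command is stubbed to an echo so the sketch runs anywhere — delete the stub line to run it against a real environment):

```shell
juju() { echo "juju $*"; }   # stub: print the command instead of executing it

# Option 1: upgrade every glance unit at once by switching the origin
juju set glance openstack-origin=cloud:trusty-mitaka

# Option 2: switch the origin but defer each unit until the operator runs an action
juju set glance openstack-origin=cloud:trusty-mitaka action-managed-upgrade=True
juju action do glance/0 openstack-upgrade
```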
<nagyz> just changing the origin wouldn't actually have the charm change the config options (if it got renamed from a to b, for example)
<nagyz> ah
<nagyz> that's the openstack-upgrade part :)
<nagyz> so I can right now test this going from kilo to liberty for example, right?
<nagyz> should that work?
<jamespage> yes
<jamespage> we've had this since folsom :-)
<nagyz> cool
<nagyz> I'd like to like juju.
<nagyz> help me like it.
<nagyz> :)
<jamespage> nagyz, look for 'openstack-origin' for that behaviour
<nagyz> so I could just go, deploy icehouse without the origin parameter, add the parameter to kilo, upgrade, change it to liberty, upgrade...
<nagyz> and shouldn't break a thing?
<jamespage> 'source' is less well specified - some charms will do an upgrade (rabbitmq-server) - some won't (ceph)
<jamespage> nagyz, that's the idea and what we test yes
<jamespage> nagyz, let me dig you out a link
<nagyz> are these covered in internal CD/CI tests?
<jamespage> nagyz, https://www.openstack.org/summit/tokyo-2015/videos/presentation/canonical-amazing-operations-201-in-place-upgrades-of-openstack-with-running-workloads
<jamespage> nagyz, yes
<nagyz> ah cool will watch later
<jamespage> we also re-verify before every charm release (3 month cadence)
<nagyz> I was actually in tokyo but missed this
<jamespage> nagyz, wolsen is awesome...
<jamespage> nagyz, watch out for godzilla ;)
<nagyz> the food in austin will be better
<nagyz> :)
<jamespage> hah
<jamespage> nagyz, I'm hoping to have our charm development upstream under openstack soon
<jamespage> nagyz, so we might get some design session time for discussing roadmap for the next 6 months for the charms
<nagyz> you mean under openstack's github?
<jamespage> nagyz, yeah - git/gerrit workflow...
<nagyz> that would actually mean I could contribute code without doing any legal paperwork
<nagyz> as we're cleared for all-openstack stuff
<jamespage> nagyz, better make that happen quick then :)
<nagyz> I need to have this region up by the end of the month
<jamespage> nagyz, we sprinted on making the transition smooth last week which was really the final blocker; I just need to complete the review for the infra team and make sure everything is ready to go
<jamespage> and all devs know where to go once we migrate!
<nagyz> cool
<nagyz> looking forward to that
<jamespage> yeah me too
<nagyz> we need to do the stress test on our ceph first but I can get someone on my team to look into charm internals to assess the proxy
<nagyz> so if no work is done yet, we have a fresh start
<nagyz> ok, really off to grab a bite - talk to you later and thanks for all the infos!
<jamespage> nagyz, tbh you can do your development where you like
<jamespage> nagyz, ack -  ttfn
<jamespage> apuimedo, right back to reviewing midonet
<apuimedo> very well
<apuimedo> jamespage: I'm doing a bugfix for midonet-api midonet-agent interaction
<jamespage> apuimedo, ok
<apuimedo> because the juju-info relation is giving me hostname.domain
<apuimedo> and the matching was on just hostname
<jamespage> apuimedo, is neutron-agents-midonet usable yet?
<apuimedo> so if you have a domain and dns configured maas, that is problematic
<jamespage> apuimedo, oh I'm testing on my laptop under LXD
<apuimedo> jamespage: the one that runs on bare-metal and then neutron-api goes inside lxc? Should be
<jamespage> its quicker for reviews....
<apuimedo> jamespage: so no dns in your setup, right?
<jamespage> nope - everything in LXD containers - its work inflight right now...
<jamespage> apuimedo, simple dns
<jamespage> ip/host forward reverse lookup only
<apuimedo> ok
<apuimedo> well, it should probably work then
<apuimedo> I haven't tried the lxd provider, do you have a link to it?
<jamespage> apuimedo, midonet-api and midonet-agent are working ok
<apuimedo> just to verify, jamespage
<jamespage> apuimedo, kinda - right now its xenial development + juju built from source with a patch...
<apuimedo> juju ssh midonet-api/0
<apuimedo> FOO=`sudo midonet-cli tunnel-zone list`
<apuimedo> crap
<apuimedo> wrong command
<jamespage> hehe
<apuimedo> sudo -i
<apuimedo> midonet-cli -e tunnel-zone list
<apuimedo> it will give you a uuid
<apuimedo> then
<apuimedo> midonet-cli -e tunnel-zone uuid_you_got member list
<apuimedo> if you have some member, we should be fine
<jamespage> I don't have a tunnel zone afaict
<apuimedo> otherwise you have the same bug I do with maas
<jamespage> $ tunnel-zone list
<jamespage> tzone tzone0 name default_tz type gre
<jamespage> is that right?
<apuimedo> yes
<jamespage> oh
<jamespage> tzone0?
<apuimedo> then tunnel-zone tzone0 member list
<apuimedo> tzone0 is an alias
<jamespage> zone tzone0 host host0 address 10.0.3.93
<apuimedo> ok, so that should be the ip of the compute machine
<jamespage> that appears ok
<jamespage> yeah - it is
<apuimedo> good
<jamespage> host list
<jamespage> host host0 name juju-2bb878e7-d45e-4422-890e-8778b0aff37c-machine-8 alive true addresses /10.0.3.93,/ffff:ffff:fe80:0:ffff:ffff:fef7:3fff,/127.0.0.1,/0:0:0:0:0:0:0:1,/192.168.122.1
<jamespage> that matches
<apuimedo> you'll get another member when you add neutron-agents-midonet related to midoent-agent
<jamespage> ok
<jamespage> doing so now...
<apuimedo> for the metadata
<apuimedo> otherwise the nova instances are not gonna get metadata for ssh keys and such
<jamespage> apuimedo, yeah I observed that already
<apuimedo> :-)
<jamespage> apuimedo, but was expecting that :-)
<apuimedo> jamespage: you're one step ahead
<jamespage> ip but not metadata
<apuimedo> in case you want to have external connectivity on the instances, I can tell you how to manually add an edge to one of the machines running midolman
<apuimedo> I helped alai do it yesterday
<jamespage> apuimedo, pls
<jamespage> that would be good before we get the gateway charm
<apuimedo> :-)
<apuimedo> is the floating range 200.200.200.0/24 good for you?
<jamespage> i can work with that
<apuimedo> okey dokey
<apuimedo> so create an external network in neutron
<apuimedo> that should automagically create a midonet "provider router" which you can see with midonet-cli -e router list
<apuimedo> then
<apuimedo> go to the compute node
<apuimedo> and do
<apuimedo> sudo ip link add type veth
<apuimedo> sudo ip link set dev veth0 up
<apuimedo> same for veth1
<apuimedo> enable sysctl forwarding
<apuimedo> sudo sysctl -w net.ipv4.ip_forward=1
<apuimedo> sudo iptables -t nat -I POSTROUTING -s 0.0.0.0/0 -d 200.200.200.0/24 -j MASQUERADE
<apuimedo> sudo iptables -t nat -I POSTROUTING -s 200.200.200.0/24 -d 0.0.0.0/0 -j MASQUERADE
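The manual edge setup just dictated, consolidated into one script (plus the veth0 address assignment, which turns out further on to be needed for the link-scope route). The commands need root, so they are wrapped in a print-only `run` helper; swap in the real definition to execute:

```shell
run() { echo "+ $*"; }   # print-only sketch; use run() { sudo "$@"; } to execute

run ip link add type veth                    # creates the veth0/veth1 pair
run ip link set dev veth0 up
run ip link set dev veth1 up
run ip addr add 200.200.200.1/24 dev veth0   # gives the host a link-scope route
run sysctl -w net.ipv4.ip_forward=1          # enable IPv4 forwarding
run iptables -t nat -I POSTROUTING -s 0.0.0.0/0 -d 200.200.200.0/24 -j MASQUERADE
run iptables -t nat -I POSTROUTING -s 200.200.200.0/24 -d 0.0.0.0/0 -j MASQUERADE
```

veth1 is then bound to a port on the MidoNet provider router (the `midonet-cli ... add binding` step later in the log), so traffic to the floating range is NATed through the host.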
<apuimedo> jamespage: did you get the uuid of the provider router?
<apuimedo> i wonder if doing veth creation on lxc containers is problematic though
<apuimedo> jamespage: going out for lunch
<apuimedo> I'll be back in an hour or so
<jamespage> apuimedo|lunch, ack
<jamespage> apuimedo|lunch, agents running, host registered - just not part of the tunnel zone yet...
 * jamespage pokes more
<jamespage> apuimedo|lunch, doing lunch myself now
<jamespage> some bits on -agents-midonet
<jamespage> the template for /etc/neutron/neutron.conf is missing:
<jamespage> [agent]
<jamespage> root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
<jamespage> [oslo_concurrency]
<jamespage> lock_path = /var/lock/neutron
<jamespage> without which the dhcp agent is not able to create and manage namespaces
<jamespage> the unit running -agents-midonet is listed in 'host list' but is not registered in to the tunnel-zone
<jamespage> I'm running with kilo (and not mem)
<jamespage> apuimedo|lunch, exported bundle from my env - http://paste.ubuntu.com/14865744/
<apuimedo|lunch> that looks small
<apuimedo|lunch> jamespage: no config?
<apuimedo|lunch> ah yes, there is on some
<apuimedo|lunch> jamespage: oh, I added that to the neutron.conf template on friday, I may have forgotten to push
<apuimedo|lunch> that's odd
<apuimedo|lunch> jamespage: can you send me the logs for the midonet-api unit?
<jamespage> apuimedo|lunch, yup
<jamespage> apuimedo|lunch, http://paste.ubuntu.com/14866135/
<apuimedo> thanks
<apuimedo> odd, I only see one ADD_MEMEBER
<apuimedo> *MEMBER
<apuimedo> and one nice failure
<apuimedo> it seems zookeeper went down and then recovered
<jose> marcoceppi_: busy atm?
<jamespage> apuimedo, what do I need to poke to get things functional?
<apuimedo> jamespage: well, we can manually add the neutron-agents-midonet host to the tunnel zone
<apuimedo> midonet-cli -e tunnel-zone uuid_of_the_tunnel_zone add member host uuid_of_the_host address ip_address_of_the_host
<apuimedo> did you get to do the iptables and veth creation I told you about before?
<jamespage> apuimedo, just getting to that now
<apuimedo> ok
<jamespage> apuimedo, veth in containers is fine - it's all namespaced OK
<jamespage> apuimedo, done those setup steps
<apuimedo> cool
<jamespage> apuimedo, I'm basically doing this - https://www.rdoproject.org/networking/midonet-integration/
<jamespage> ?
<apuimedo> without the bridge
<apuimedo> since it is not necessary
<apuimedo> did you create an external neutron network?
<jamespage> apuimedo, yes - I can see the router in midonet-cli, but not in neutron - is that right?
<apuimedo> it is :-)
<apuimedo> it's like a parent router
<jamespage> okay
<apuimedo> midonet-cli -e router uuid_of_the_provider_router add port address 200.200.200.2 net 200.200.200.0/24
<apuimedo> this will give you a uui
<apuimedo> *uuid
<jamespage> apuimedo, done
<apuimedo> midonet-cli -e host uuid_of_the_compute_node_in_host_list add binding port router router_uuid port previous_uuid interface veth1
<apuimedo> after this command, from nova-compute/0, you should be able to ping 200.200.200.2
<jamespage> apuimedo, hmm - I may have an issue with cassandra - one second
<apuimedo> ok
<jamespage> apuimedo, hmm so midonet-agent is not configuring the connection to cassandra with credentials
<apuimedo> no
<jamespage> apuimedo, I can turn auth off on cassandra
<jamespage> its on by default
<jamespage> one sec
<apuimedo> Didn't I send you the charm config for Cassandra?!
<jamespage> apuimedo, erm no
<jamespage> at least I don't think so
<apuimedo> mmm
<apuimedo> let me check
<jamespage> apuimedo, ok re-deployed - still not using mem - but the config in midonet-cli looked good and I can see midolman logging port events on the edges...
<jamespage> that said no cigar on the ping yet
<apuimedo> mmm
<apuimedo> ip -4 route on the host where there's the veths
<jamespage> default via 10.0.3.1 dev eth0
<jamespage> 10.0.3.0/24 dev eth0  proto kernel  scope link  src 10.0.3.5
<jamespage> 192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1
<jamespage> apuimedo, ^^
<jamespage> apuimedo, I saw:
<jamespage> 2016.02.03 16:46:05.339 INFO  [midolman-akka.actor.default-dispatcher-5] datapath-control -  Datapath port veth1 added
<jamespage> 2016.02.03 16:46:05.339 INFO  [midolman-akka.actor.default-dispatcher-5] datapath-control -  Port 4/veth1/a8d30ec9-e44d-42fa-9d90-0d02203581cf became active
<apuimedo> jamespage: you are missing a link scope route for 200.200.200.1
<apuimedo> did you do
<apuimedo> ip addr add 200.200.200.1/24 dev veth0?
<jamespage> apuimedo, erm I have now
<jamespage> missed that - apologies
<apuimedo> no problem ;-)
<apuimedo> oh, add a subnet with 200.200.200.0/24 to the public net
<jamespage> apuimedo, already done
<apuimedo> can you ping 200.200.200.2
<jamespage> apuimedo, nope
<jamespage> should i be able to see that bound anywhere on the server?
<apuimedo> mmm
<apuimedo> on midonet-cli
<apuimedo> create a router
<apuimedo> in neutron
<jamespage> apuimedo, plugged into external and internal networks?
<apuimedo> yes
<apuimedo> but even so, we should have ping already; yesterday with alai we already had it at this step
<apuimedo> I feel like we probably missed something
<apuimedo> iptables -n -L -t nat
<apuimedo> jamespage: oh, and
<alai> yes thanks apuimedo , i think we are getting very close
<apuimedo> midonet-cli router provider_router_uuid port list
<apuimedo> maybe we forgot to bind the right address
<jamespage> apuimedo, how do I unbind the previous bound router port?
<apuimedo> why, the current one doesn't have address jamespage ?
<jamespage> apuimedo, so i created the router in neutron, and plugged it in
<jamespage> router router0 name MidoNet Provider Router state up
<jamespage> router router1 name public-router state up infilter chain6 outfilter chain7
<apuimedo> that's good
<jamespage> I now have extra ports for the neutron created router
<apuimedo> can you show me the output of the provier router port list?
<jamespage> apuimedo,
<jamespage> port port8 device router1 state up plugged no mac ac:ca:ba:72:6f:7c address 200.200.200.2 net 169.254.255.0/30 peer router0:port2
<jamespage> port port10 device router1 state up plugged no mac ac:ca:ba:7b:6c:76 address 192.168.21.1 net 192.168.21.0/24 peer bridge1:port0
<jamespage> port port9 device bridge1 state up plugged no peer router1:port1
<jamespage> apuimedo, erm maybe
<apuimedo> mmm
<jamespage> apuimedo, I'm not finding midonet-cli very discoverable
<jamespage> apuimedo, ah I also have to dial into another call...
<jamespage> biab
<apuimedo> jamespage: yes, midonet-cli takes a while to get used to
<apuimedo> ok, we can continue later/tomorrow
<agunturu> Is it possible to get the list of parameters to a JUJU action?
<agunturu> The âjuju action definedâ is listing the actions, but not the parameters.
<agunturu> ubuntu@juju:~/mwc16charms/trusty/clearwater-juju$ juju action defined ims-a
<agunturu> create-user: Create a user.
<agunturu> delete-user: Delete a user.
<marcoceppi_> agunturu: juju action defined --schema
<agunturu> Hi marcoceppi_. Thanks that works
<marcoceppi_> agunturu: cheers!
<firl> lazypower|summit you around?
<marcoceppi_> firl: we're GMT+1 atm, he might be in bed
<firl> marcoceppi_: gotcha thanks!
#juju 2016-02-04
<yuanyou> jamespage: Hi, as we know, the neutron-gateway charm has a config "ext-port", and I want to get the "ext-port" value in my own charm onos-controller. How should I do that? thanks
<deanman> Is it possible to customize the look and feel of juju-gui charm to your needs?
<apuimedo> jamespage: you created a new merge proposal for nova-cloud-controller besides https://code.launchpad.net/~celebdor/charms/trusty/nova-cloud-controller/liberty/+merge/283709?
<apuimedo> ?
<jamespage> apuimedo, sorry - branch hell
<jamespage> deleted not
<jamespage> now
<apuimedo> ;-)
<apuimedo> yes, it's just that I have some extra branches for backports I do
<apuimedo> since when somebody wants to try it I usually backport the changes into stable
<jamespage> apuimedo, I know we discussed this but I can't remember the rationale for not having the nova-metadata service co-located with the neutron-dhcp and neutron-metadata agents in neutron-agents-midonet like we have in the neutron-gateway charm?
<jamespage> rather than using the one provided by the nova-cc service
<apuimedo> jamespage: We did :-)
<apuimedo> now that neutron-agents-midonet is split from neutron-api I may re-evaluate in a future version
<apuimedo> but the deployment recommendation in midonet is usually like that
<apuimedo> jamespage: you suggest adding some nova config in neutron-agents-midonet
<apuimedo> so that it runs neutron-dhcp neutron-metadata and nova-api-metadata, right?
<jamespage> apuimedo, well i have two ideas - one to run the nova-metadata service on neutron-agents-midonet and drop the shared-secret stuff across both charms - each neutron-metadata agents just talks to localhost with a unit local secret
<jamespage> so like you say
<jamespage> or
<apuimedo> well, they wouldn't even need a secret setting then
<jamespage> apuimedo, indeed
<jamespage> well you do but just for the local neutron-nova comms
<jamespage> no configuration at charm level.
<jamespage> apuimedo, that's what neutron-gateway does today...
<apuimedo> jamespage: IIRC, if you leave it blank it also works :P
<jamespage> really?
<apuimedo> that's what I saw in some of our cloud configs
<jamespage> apuimedo, that sounds like a security vulnerability to me
<apuimedo> well, I'd say it is an option to disable the "secure" comm
<apuimedo> I opted to keep it in my charms though
<jamespage> # If authentication token is set, authenticate
<jamespage> ha!
<apuimedo> ;-)
<apuimedo> precisely
<jamespage> so only auth if someone actually set the value
<apuimedo> jamespage: do you know how much of a nova.conf chunk I'd have to add if I want to give it a shot?
<jamespage> apuimedo, I know exactly
<apuimedo> :-)
<jamespage> apuimedo, nova needs rabbitmq and the keystone auth data as the neutron agents need - so no new interfaces...
<jamespage> apuimedo, something like - http://paste.ubuntu.com/14876487/
<jamespage> that's pulled from neutron-gateways kilo template for nova
<apuimedo> right
<apuimedo> so I'll give it a shot later
<apuimedo> I'm now fixing the linting and unit tests of midonet-api
<apuimedo> thanks for the template
<jamespage> apuimedo, how about I do that for you now? I'd rather work that than have the manual configuration of shared secret in nova-cc
<jamespage> apuimedo, I can test locally
<jamespage> apuimedo, are you worried about upgrades to charms atm?
<apuimedo> well, that would be nice
<apuimedo> :-)
<jamespage> apuimedo, ok working with kilo ok
<jamespage> apuimedo, I'll need to re-deploy to test juno
<apuimedo> cool
<jamespage> apuimedo, actually
<jamespage> juju create-model midonet-juno
<jamespage> juju deploy midonet-juno.yaml
<jamespage> \o/
<jamespage> hurrah for 2.0 models
 * apuimedo wants models
<apuimedo> why didn't I start working on Juju now instead of before :'(
<apuimedo> it's starting to get much easier
<jamespage> apuimedo, apparently my laptop is not up for that much stuff grabbing memory...
<jamespage> ETOMUCHJAVA
<jamespage> lol
<apuimedo> jamespage: xD
<apuimedo> ETOMUCHJAVAANDPOSSIBLYSCALA
<jamespage> lol
<apuimedo> you need like 10G for the whole openstack plus zk+cass+midonet agents
<jamespage> and maybe a bit of auto config by mysql grabbing anything left..
<jamespage> x2
<jamespage> apuimedo, OMG that's pretty much the zookeeper charm I wrote many years ago
<jamespage> apuimedo, well it still works - nice to know :-)
<apuimedo> jamespage: :-)
<jamespage> apuimedo, there are more recent choices from the big data team:
<jamespage> https://jujucharms.com/apache-zookeeper/trusty/1
<apuimedo> jamespage: shouldn't this be promoted to cs:trusty/zookeeper then?
<jamespage> apuimedo, tricky
<apuimedo> or at least cs:xenial/zookeeper
<apuimedo> for backwards compatibility
<jamespage> cory_fu, ^^ what do you think
<jamespage> I suspect that moving from zookeeper -> apache-zookeeper is not a supported charm upgrade path
<apuimedo> jamespage: why did you move the merge proposal from needs review to wip?
<apuimedo> without having it run yet, it looks good
<jamespage> apuimedo, cause I'm still testing it - I think its ok - just verifying with juno
<apuimedo> s/it run/run it/
<apuimedo> cool
<apuimedo> your RAM will suffer
<jamespage> I have some lxd/cloud-init oddisms I have to deal with until a bug is resolved
<vaewyn> Hey all...  found a bunch of environment variables but have yet to find one with the charm's name in it... is there such a beast I can use in the hooks? Wanting one of the hooks to act differently depending on whether the charm is directly deployed... or is used as a layer for another charm
 * arosales taking a first look at the LTE eNB charms in the review queue
<arosales> vaewyn: hello. I may not understand your question, but you can't use a charm as a layer for another charm.  You have to create an explicit layer that is used to build a charm.
<arosales> So I can't use https://jujucharms.com/mysql/ as a layer for another charm. If you wanted to build a charm using layers you would need to include a layer from http://interfaces.juju.solutions/
<arosales> vaewyn: ^ hth
<vaewyn> arosales: I guess maybe I should say more about what I am trying to accomplish then...  I have a charm currently that sets up the environment and downloads a central codebase... then I have subordinate charms to deploy various apps off of it... I was trying to reform all that into layers so I can still reuse the codebase charm in each application but end up with a single charm to deploy, made of layers instead
<vaewyn> arosales: the codebase charm has no relations or anything really... it's just a "get this crud on the disk" charm
<vaewyn> arosales: I'm a very newb to this so I may be totally heading the wrong way but this path looked interesting
<vaewyn> arosales: reading more... now I see what you mean :)  Thanks for the direction
<arosales> vaewyn: your approach seems very reasonable. It sounds like what we call a "framework" charm, ie rails or java
<arosales> but for that "codebase charm" I think you just need to create a layer.
<arosales> ref = https://jujucharms.com/docs/devel/developer-layers && https://pythonhosted.org/charms.reactive/
<arosales> vaewyn: ^
<vaewyn> arosales: It seems kindof inferred...  but are layers always processed/completed in the order of inclusion? ie... are the reactive portions just for interaction with other charms or for internal change events also as each layer gets slapped together?
<arosales> vaewyn:  layers give you 2 basic pieces of functionality to build a final charm.
<arosales> 1. "layers" give you the ability to react to "states" to model how the service, ie code how the internal events are handled
<arosales> 2. "interfaces" give you the ability to easily react to "state" when communicating with other applications (charms)
<arosales> vaewyn: ^
<arosales> btw, for reference here is the ruby layer http://interfaces.juju.solutions/layer/openjdk/ and the java layer = http://interfaces.juju.solutions/layer/openjdk/
<vaewyn> arosales: ahh... those are much cleaner.. the apache-php layer seems to blur those lines you stated a bit... so I was a bit confused
<arosales> ya apache-php may blur the lines a little from "run-time" and app
<arosales> vaewyn: I hope that helps, let us know how your charming goes
<vaewyn> arosales: does yes... thank you! and will do
<jamespage> apuimedo, ok so that works on juno as well
<apuimedo> great jamespage
<jamespage> apuimedo, I love the fact that java is so sensitive to DNS crappiness
<apuimedo> xD
<jamespage> apuimedo, https://bugs.launchpad.net/juju-core/+bug/1540061
<mup> Bug #1540061: juju 2.0 alpha + local LXD provider: dns hostname inconsistency <juju-release-support> <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1540061>
<jamespage> apuimedo, until the first dhcp lease renewal everything is called 'ubuntu' in the dnsmasq instance...
<apuimedo> jamespage: also, alai reported some lxc containers in maas env not getting "search domainname" in /etc/resolv.conf
<apuimedo> which is problematic
<apuimedo> jamespage: so I'll merge your proposal
<jamespage> apuimedo, that sounds like old MAAS behaviour to me - I'll check
<apuimedo> jamespage: I didn't experience it
<apuimedo> it was on oil
<jamespage> apuimedo, anyway - if you're ok with my changes to neutron-agents-midonet, lets drop https://code.launchpad.net/~celebdor/charms/trusty/nova-cloud-controller/liberty/+merge/283709
<jamespage> its really not needed any longer \o/
<alai> jamespage, apuimedo : it's maas 1.9 we are using
<apuimedo> jamespage: there's the part of selecting the libvirt vif driver
<jamespage> apuimedo, I think that may only be required in the nova-compute units
<apuimedo> yeah, I thought so too
<jamespage> I've been running with stable n-cc charm and its working fine...
<apuimedo> alright, I'll abandon the merge
<jamespage> apuimedo, awesome
<apuimedo> jamespage: I'll never get used to a brit saying awesome instead of brilliant :-)
<apuimedo> jamespage: you had to manually add the compute hosts to the tunnel zone, right?
<jamespage> apuimedo, nope - they all got added automatically
<jamespage> \o/
<jamespage> apuimedo, I had to work around that bug I detailed with a quick dhclient -r && dhclient before stuff got installed but apart from that 0 hacks...
<jamespage> apuimedo, instances are getting dhcp IP addresses and can access the metadata service fine
<jamespage> apuimedo, still can't get the external network bit going...
<apuimedo> ok
<jamespage> but that may be my test right
<jamespage> rig being a bit bleeding edge...
<apuimedo> well, I'm fixing the midonet-api unit tests and changing how the midonet-api registers the hosts into the tunnel zone
<apuimedo> to use contrib.network.ip to get the ip of the compute node
<apuimedo> and get the uuid from the relation
<apuimedo> because otherwise it doesn't work with maas with dns domains :(
<jamespage> apuimedo, ack
<jamespage> apuimedo, I'll post a summary of TODO to the bug
<apuimedo> very well ;-)
<apuimedo> I am now removing the split branch
<apuimedo> and leaving only /trunk
<magicaltrout> alright chaps, charm publish, does that stuff work for us minions if I'm running the PPA?
<magicaltrout> or do I have to wait?
<jrwren> magicaltrout: it works
<magicaltrout> boom!
<magicaltrout> cool
<jhobbs> is there some way to tell juju which interfaces to associate with containers it creates?
<jhobbs> we're using MAAS 1.9's vlan interface configuration and want containers to be connected to one or more interfaces
<jhobbs> via a bridge
<jhobbs> MAAS can't create the bridge - can juju do it?
<jhobbs> this is the MAAS bug for creating the bridge https://bugs.launchpad.net/maas/+bug/1512187
<mup> Bug #1512187: ability to create bridges on MAAS nodes for use without Juju <MAAS:Triaged> <https://launchpad.net/bugs/1512187>
<magicaltrout> in actions, can you set default parameter values?
<magicaltrout> of course in the action I can test to see whether its set or not, but if a sensible default was set if it wasn't passed in by the user, it would save me writing check logic
<handicraftsman> Hello
<handicraftsman> How can I install juju on my Debian sid?
<handicraftsman> It does not support PPAs
<jrwren> handicraftsman: you can use the long form of ppa added to sources.list
<handicraftsman> I'll choose the xenial version, because it's not finished and ubuntu is always built on debian sid
<jrwren> handicraftsman: go here: https://launchpad.net/~juju/+archive/ubuntu/devel/  expand the green "Technical details about this ppa" link and for sid, i'd recommend picking wily, but YMMV.  You'll need to run `sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C8068B11 `
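The "long form" of a PPA is just its underlying archive URL; a sketch assuming the standard Launchpad PPA layout for ppa:juju/devel, with jrwren's suggested wily series (the lines are echoed rather than written, to keep the example side-effect free):

```shell
# Long-form sources.list entries for ppa:juju/devel (series "wily" per the advice
# above). Pipe into /etc/apt/sources.list.d/juju-devel.list to apply, then import
# the signing key with the apt-key command given in the log.
PPA_LINE="deb http://ppa.launchpad.net/juju/devel/ubuntu wily main"
echo "$PPA_LINE"
echo "deb-src http://ppa.launchpad.net/juju/devel/ubuntu wily main"
```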
<handicraftsman> I'll just install it on my linux lite laptop xD
<marcoceppi> magicaltrout: yes, you can define a default
<magicaltrout> marcoceppi: thanks
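Defaults live in the action's parameter schema in actions.yaml; a hypothetical fragment (action and parameter names invented) showing where the `default` key goes:

```yaml
# actions.yaml — sketch of an action with a defaulted parameter
create-user:
  description: Create a user.
  params:
    username:
      type: string
      description: Name of the user to create.
      default: admin    # used when the caller passes no value
  required: [username]
```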
<magicaltrout> somehow this hotel's wifi is worse than the other one, so testing this stuff is taking time! :P
<marcoceppi> magicaltrout: boo
<marcoceppi> magicaltrout: now that everyone is gone it's rather peppy here ;)
<magicaltrout> hehe
<magicaltrout> lack of porn being viewed......
<jrwren> i'm roughly following the vanilla layers walkthrough and build says it can't find the layer i'm using.
<jrwren> Does the walk through skip a part where I download layers to INTERFACE_PATH and LAYER_PATH?
<marcoceppi> jrwren: no, you don't need to download layers
<marcoceppi> jrwren: can you pastebin the output
<marcoceppi> jrwren: (layers are fetched from the index)
<jrwren> http://pastebin.ubuntu.com/14882887/
<jrwren> let me check my charmtools version. maybe i didn't re-add ppa after upgrading to wily or something stupid.
<marcoceppi> jrwren: yeah, was about to ask
<marcoceppi> jrwren: charm version should do it
<jrwren> 1.11.1
<marcoceppi> that's latest
<marcoceppi> jrwren: is your layer somewhere?
<jrwren> marcoceppi: no, it's brand new.
<marcoceppi> jrwren: could you put it somewhere :)
<marcoceppi> jrwren: also charm build -l DEBUG will be helpful
<jrwren> http://pastebin.ubuntu.com/14882909/
<marcoceppi> jrwren: layer.yaml file?
<jrwren> includes: ['layer:nodejs', 'interface:mongodb']
<jrwren> marcoceppi: https://github.com/jrwren/parse-server-example/tree/no-go-layers
<marcoceppi> jrwren: you have a few problems, not related
<marcoceppi> I don't think - in reactive file paths will work because of how they're imported
<jrwren> marcoceppi: tons, i'm sure :)
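For reference, the layer.yaml quoted above, with the caveat marcoceppi raises: reactive handler files are imported as Python modules, so their filenames should avoid hyphens (the handler filename below is hypothetical):

```yaml
# layer.yaml for the charm being built
includes: ['layer:nodejs', 'interface:mongodb']
```

The handler under reactive/ would then be named e.g. reactive/parse_server.py rather than reactive/parse-server.py.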
<marcoceppi> jrwren: works for me. Unset your LAYER_PATH and INTERFACE_PATH
<marcoceppi> if there's nothing in there you don't need to set
<jrwren> same results.
<marcoceppi> http://paste.ubuntu.com/14882948/
<jrwren> amazing
<marcoceppi> this is what it should do jrwren http://paste.ubuntu.com/14882969/
<marcoceppi> jrwren: does http://interfaces.juju.solutions load for you?
<jrwren> marcoceppi: yes, i even just used curl to make sure it loads from the remote system on which I'm writing this
<jrwren> marcoceppi: i added some debugging, i get 'build: fetch FetchError:No fetcher for url: layer:nodejs'
<jrwren> indeed there is no layer fetcher
<marcoceppi> jrwren: did you ever pip install charmtools?
<jrwren> marcoceppi: not system wide. Maybe in a venv someplace, but that venv isn't active.
<marcoceppi> ack
<marcoceppi> otp, brb
<jrwren> marcoceppi: tried on a different system, can confirm it's something wrong with that system.
<jrwren> marcoceppi: thanks for walking me through.
#juju 2016-02-05
<bolthole> hi Juju mavens.... I'm trying to find some kind of comparison between juju and kubernetes
<bolthole> They seem to be somewhat competing tech, both being "orchestration tools"..  but searches only ever seem to bring up "how to deploy kubernetes USING juju" !!
<jrwren> bolthole: juju is a layer higher ;]
<jrwren> bolthole: kubernetes is a container management system, juju is a service modeling system. Some part of your juju modeled services might be containers, may not be, and some part of those may be kubernetes containers.
<jrwren> bolthole: does that make sense?
<bolthole> jrwren .... kind of..  Some coworkers are claiming that kubernetes is also a service orchestration tool
<bolthole> at this point, I am somewhat familiar with juju, but not at all with kubernetes. So I need more info to make a good comparison for management
<jrwren> bolthole: i don't know enough about kubernetes to say anymore than I already have. Sorry.
<firl> marcoceppi you guys still at the summit?
<jacekn> hello. Can somebody tell me if https://bugs.launchpad.net/charms/+bug/1538573 is on you charmers' radar?
<mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:New> <https://launchpad.net/bugs/1538573>
<stub> jacekn: It needs a merge proposal targetting lp:charms/trusty/collectd to get on the review list.
<jacekn> stub: but that's not mergeable, it's a completely new charm
<stub> jacekn: ok. I can give it a look over and try and help with Amulet, but one of the eco team will need to help with landing once they are back from their thing.
<jacekn> stub: cool, thanks. FTR I asked here before about amulet; nobody was able to help, looks like it was never used with "juju-info" relations
<stub> What sort of things did you find that failed?
<stub> I fall back to raw juju commands or plugins like juju-wait when Amulet doesn't stretch far enough.
<jacekn> stub: can't remember exactly but it was something about cs:ubuntu and the collectd relation not being added
<jacekn> stub: I think it may have been because "juju-info" is not declared interface or something like that
<stub> That sounds like it is coming from juju-deployer, which Amulet uses for the initial deploy.
<jacekn> could be yeah, anyway have a look when you have a moment
<Laney> hi juju-ers
<Laney> I have a subordinate service (nrpe-external-master) and now I want the thing it is subordinate to to drop some config into it (nagios checks) on upgrade
<Laney> I wrote a hook nrpe-external-master-relation-changed and then did juju upgrade-charm my-unit but it didn't get called
<Laney> any way to get it to work so that I can add more in future and just be able to run some command?
<rick_h__> Laney: i'd suggest an email to the list. i think folks might have a solution and be able to discuss it with a wider group there.
<Laney> rick_h__: ok, which list?
<Laney> I'm sure there is a solution as like 9999 Canonical charms use nrpe-external-master :P
<Laney> found juju@l.u.c
 * Laney hopes someone is moderating the list because he doesn't want to subscribe
<rick_h__> Laney: yes the juju@ list
<Laney> rick_h__: thanks, I sent something
<Laney> no I didn't, it got rejected
<Laney> bah
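For reference, a principal charm picks up a subordinate like nrpe-external-master through a container-scoped relation declared in its metadata.yaml; a minimal sketch, following the convention used by the stock Canonical charms:

```yaml
# metadata.yaml fragment on the principal charm
provides:
  nrpe-external-master:
    interface: nrpe-external-master
    scope: container
```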
<lazypower|summit> bolthole o/
<bolthole> hi lazypower
<bolthole> did you see my question yesterday ?
<lazypower|summit> bolthole no, but i did see the one this morning about the differences between k8s and juju
<lazypower|summit> bolthole which is one that i can answer :)
<firl> lazypower|summit: you around?
<lazypower|summit> bolthole - the big and main difference is that Juju is a modeling tool / orchestrator - while Kubernetes is a container orchestrator. They have somewhat similar properties but the sidewalk ends pretty quickly during a comparison. Juju allows you to model things like networks and storage across providers. K8s allows you to describe the deployment of containers, and scale/"manage" them. But it's very inflexible in terms of making services
<lazypower|summit>  talk to one another consistently, and it's quite an exercise in abstract thinking, as you have to work with yaml files describing pods and services (akin to charms and units)
<lazypower|summit> firl o/ hey there
<firl> have time for the kubernetes stuff?
<lazypower|summit> firl I'm hacking on some end of week stuff, but i started to dig into the failures in the head of the k8s work
<lazypower|summit> i've got a few potential fixes sitting on my disk that need testing
<lazypower|summit> firl is there anything specific i can help with aside from being super latent to deliver a fix?
<firl> haha just waiting to do the install
<firl> I am on PTO next week
<firl> whatâs the github for the layer again?
<lazypower|summit> sorry again :( summit/fosdem pretty well ate up my week and i'm out until Thurs of next week on PTO as well
<lazypower|summit> http://github.com/mbruzek/layer-k8s
<lazypower|summit> the specific layer that's having an issue is the flannel layer; it's not been re-worked to be compliant with the etcd changes to support a proper cluster
<firl> gotcha
<lazypower|summit> http://github.com/chuckbutler/layer-flannel  (mostly undocumented as it was intended to be temporary)
<firl> so whatâs the best way to install kubernetes then?
<lazypower|summit> if you use cs:trusty/etcd-4
<lazypower|summit> it'll be fine as is from tip of the built charm living in mbruzek's github repo
<firl> so use etcd-4 and the layer-k8s?
<firl> and it should work
<lazypower|summit> yep. you will need to run `charm build` in the k8s layer
<lazypower|summit> you can't just deploy the layer and expect magic :)
<firl> haha ok
<firl> is this going to be close to the final solution ( curious )
<lazypower|summit> firl - we should have a published bundle in the store ~ the time you get back from PTO
<lazypower|summit> we have a line item to deliver that by end of week in our namespace
<firl> oh awesome
<firl> I didnât realize it was that soon
<lazypower|summit> pretty close, yeah
<firl> I should be able to wait for the bundle then
<lazypower|summit> we're still cycling on breaking apart the components
<lazypower|summit> so you can independently deploy/scale api-server vs nodes
<firl> out of curiosity how do I set up the external net for floating IPs essentially
<firl> for the kubectl service layers
<lazypower|summit> That depends on which SDN layer is in the output bundle
<firl> kk
<lazypower|summit> if its weave, you use weave router and reverse proxy nodes
<lazypower|summit> if there's no sdn - you have to do some iptables shenanigans and supposedly we can get some floating IP bits from the cloud provider, but that gets expensive in a hurry
<lazypower|summit> plus you're limited to i think like... 7... unless you request more
<firl> eww
<lazypower|summit> if you use flannel, basically same principle as weave but i'm still working through the nuances of that
<firl> the bundle wonât have any by association default?
<lazypower|summit> i'm trying to nail down some time to work with illya of weaveworks to do a full integration w/ weave,  and similar with flannel but haven't heard back from my contact(s) at CoreOS
<lazypower|summit> we'll publish a different flavored bundle with each configuration we're going to support
<lazypower|summit> oh! and we may even have a go w/ the fan :)
<lazypower|summit> but dont ask me about that one yet, i still have a lot of catching up to do with our darling SDN solution
<firl> haha
<haasn> I don't understand how juju is supposed to work. I have a MAAS setup, and I understand that juju charms let me automatically deploy services on this MAAS. But before I am able to do this, I need to bootstrap juju? And this needs another machine for some reason? What is this machine doing?
<firl> sounds good, it will be interesting to see, thanks man
<firl> I will check back in after PTO
<lazypower|summit> haasn - juju deploys a controller - which is responsible for managing each charm you deploy
<lazypower|summit> firl np np, thanks for the interest :)
<haasn> lazypower|summit: Can I just install this controller "locally" instead of needing a dedicated machine for just that?
<haasn> i.e. on some machine of my choosing
<haasn> (in this case it would be the same as the MAAS controller machine)
<haasn> I guess I could also set up a virtual machine on the MAAS controller, add that to the MAAS, then somehow coax juju bootstrap into choosing this machine for juju.. but it still seems like overkill
<lazypower|summit> Not easily, no.
<lazypower|summit> that would be my suggestion
<lazypower|summit> to register a VM, tag it, and use tags as a constraint
<haasn> I found juju-local which lets me use lxc containers, but I don't want to use lxc containers for *everything*, just the controller
<lazypower|summit> juju bootstrap --constraints="tags=bootstrap"
<haasn> ah okay
<lazypower|summit> if maas knew about LXD containers (which seems like an odd duck as lxd containers are very machineish but they don't have all the same properties of a machine) it could support that :)
<jrwren> haasn: once you bootstrap you don't have to waste machine 0 on just the controller. you can deploy to lxc:0 to use that same machine.
<lazypower|summit> jrwren - aiui, we're discouraging that moving forward with the 2.0 changes
<haasn> I guess I'll make as small a VM I can on some slow VM host
<jrwren> lazypower|summit: why is that?
<haasn> It's not like the machine is doing anything is it?
<lazypower|summit> jrwren - i'm going to defer to rick_h__  to make sure i'm not spreading FUD
<jrwren> oh, well, 2.0 will be lxd instead of lxc
<jrwren> but same principle should apply. i'll be sad if it doesn't
<lazypower|summit> haasn quite a bit actually, it's only coordinating every service you deploy into the model, ever :)
<lazypower|summit> jrwren have you used JES yet?
<haasn> âcoordinatingâ?
<lazypower|summit> the 'default' bootstrap model is "special" - when you add a new model no the controller there is no machine 0
<lazypower|summit> s/no/on
<jrwren> lazypower|summit: yes, ah yes, I see what you mean.
<jrwren> lazypower|summit: still, it works ok with JES
<lazypower|summit> haasn correct, it coordinates event states on many units that belong to many applications
<lazypower|summit> jrwren - how do you deploy to 0 (eg: the controller) when you are in a different model, and there is no machine 0 until you either add-machine or juju deploy "thing"
<jrwren> haasn: i agree with you, it shouldn't be doing anything unless i'm deploying or adding relations ;]
<jrwren> lazypower|summit: the controller lives in a model itself, you can use that model for something.
<lazypower|summit> heh
<lazypower|summit> ok
<lazypower|summit> have fun thinking that ;)
<jrwren> *shrug*
<jrwren> if I have to burn yet another machine for a controller, i'm sad.  that is all I know for sure.
<lazypower|summit> i mean you can colocate stuff on the controller, in that "special" environment
<haasn> I guess the implicit assumption in enterprise software is that you have hundreds of machines, so one more control machine is just another figure
<jrwren> but I guess that is just a good reason to run the JES controller in a manual model in something like maas so a machine isn't wasted
<jrwren> haasn: yes, I agree, that seems to be an implicit assumption. I think it is more cloud-scale assumption than enterprise.
<bolthole> sooo.. speaking of juju-local... no support for using juju+docker instead of juju+lxc. how come? just that no one is interested in doing the work?
<lazypower|summit> bolthole - oh? juju can deliver docker, and deliver docker payloads
<lazypower|summit> http://github.com/juju-solutions/layer-docker
<bolthole> thats not what I asked
<bolthole> i think
<bolthole> What if I just want to run juju with environment=local, but use docker containers instead of lxc, with nothing else changing?
<bolthole> normal juju use other than that
<bolthole> (i'm new to docker, so I don't know if this even makes sense for sure)
<bolthole> is it that the charm infrastructure relies too heavily on assumptions about the OS layer inside lxc, but certain things are just not present inside docker containers by default?
<lazypower|summit> bolthole that doesn't really make sense due to the fact that app containers (eg: docker) are intended to be immutable artifacts
<lazypower|summit> while it's reasonably safe to assume you could start one with a /sbin/init system running, we have machine/system containers to do this that are intended to run an init system
<lazypower|summit> bolthole - i suppose the short way to say that is "lack of effort, and the end product sounds like a hacky workaround to use docker for the sake of using docker"
<bolthole> mm. sometimes, though, due to "Reasons".. it is necessary to use docker for the sake of docker :-/
<bolthole> we shall see
<jrwren> i am trying to use the mongodb interface. after charm build it creates a hooks/relations/mongodb dir with modules in it. Where do these modules come from?
<smartbit> on OSX I connect to http://127.0.0.1:6079 with vagrant image http://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-juju-vagrant-disk1.box. I get a "Connecting to the Juju environment" and no login.
<smartbit> Tried config.trigger.after 'sudo route add -net 10.0.3.0/24 172.16.250.15 >/dev/null' to no avail.
<smartbit> Any suggestions?
<haasn> I created a container via the web UI and this container is stuck on "pending" (as per `juju status`). How can I figure out what's wrong? I attached to the container and it doesn't seem to be doing anything
<haasn> I can reach it from the outside, too
<haasn> (ping)
<rick_h__> haasn: can you ssh to the container and look for a log in /var/log/juju with some activity
<rick_h__> haasn: normally the container is running a unit of something so I'd say to grab the unit log, but if you just asked for a container there should be a machine log there I believe
<haasn> https://0x0.st/X_x.txt this maybe?
<rick_h__> haasn: hmm, replicasets are a mongodb thing. Is this in lxc locally? what version of juju?
<haasn> hmm no I'm getting those errors outside of the container too (/var/log/juju/machine-0.log seems to be present on both)
<haasn> I tried destroying the container with `juju destroy-machine 0/lxc/0` and now it's stuck in "state: pending life: dying"
<rick_h__> haasn: is the container on machine 0?
<haasn> and still doesn't seem to be doing anything
<haasn> yes
<haasn> and now I get that "public no address" error about once every few seconds
<rick_h__> haasn: and this is on lxc provider? lxd? aws?
<haasn> no idea. `juju version` on the machine I'm controlling from is 1.25.0-wily-amd64 but I have no idea if that's relevant
<haasn> well, I managed to get rid of the container by using lxc-stop, lxc-destroy and juju destroy-machine --force
<haasn> Time for another attempt. Can I set the MAC address to use for containers when creating them? Part of the reason it was stuck to begin with may have had to do with the fact that my DHCP server didn't have an entry for the random MAC it generated
<haasn> (then again, it's not like the GUI asked me..)
<rick_h__> haasn: right but you ran "juju bootstrap" on some provider?
<haasn> Yes, juju bootstrapped itself into a machine
<rick_h__> haasn: where you configured an entry in your environments.yaml
<rick_h__> using the manual provider?
<haasn> Yes, I configured that file and added a single environment of type: maas
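For reference, a juju 1.x environments.yaml entry for the maas provider looks roughly like this (the server URL and OAuth key below are placeholders):

```yaml
environments:
  maas:
    type: maas
    maas-server: 'http://<maas-host>/MAAS'
    maas-oauth: '<maas-api-key>'
    default-series: trusty
```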
<haasn> This works, too
<haasn> I have multiple maas nodes that I can request just fine with juju and deploy charms on
<haasn> But now I want to have a maas node that hosts multiple LXC containers
<rick_h__> ah ok
<haasn> (for small charms not worth wasting a full server on)
<rick_h__> that helps clarify
<haasn> It's the creation of these LXC containers that doesn't seem to be working
<rick_h__> ok, so right, maas should be able to provide dhcp addresses to the containers on the machines
<haasn> I'm trying to figure out how to replicate the configuration the GUI tried to apply from the command line
<rick_h__> sure thing, sec
<haasn> I'm not using maas DHCP, I'm using an external DHCP server. I add the MACs of new servers manually
<rick_h__> haasn: https://jujucharms.com/docs/1.25/charms-deploying search for "placement"
<haasn> I understand that I can use that --to syntax to install multiple charms on a single machine, but that installs them "side by side" inside the same machine, on the same FS - they don't get isolation, they don't get individual IPs, etc. I figured I would prefer having them just live in LXC containers which are reasonably lightweight but provide some free isolation
<rick_h__> haasn: ah sorry, I thought it had a container example there as well but not seeing it.
<rick_h__> haasn: you do what the gui is doing by specifying a container path in the --to flag
<haasn> I guess the container example was something like "juju add-machine lxc:0" to add a new LXC container on machine 0
<haasn> which seems to run into the exact same result as doing it from the GUI
<rick_h__> haasn: juju help deploy
<rick_h__> there's an example there of deploying into a container
<haasn> rick_h__: I'm not that far yet. I need a container working before I can deploy services into it
<rick_h__> so juju deploy mysql --to lxc:0 I think
<haasn> The container doesn't work, and the problem seems to be that it picks a random MAC address instead of letting me configure what MAC to use
<haasn> I'm looking for a way to solve this issue
<rick_h__> haasn: right, but saying you can have Juju create the container and put the service on it at the same time
<haasn> ah okay
<rick_h__> haasn: but you're right, if there's a dhcp issues with the mac then solving that first is a good thing
<rick_h__> haasn: I don't know of any way to inject the mac address there for you
<haasn> I guess the best "solution" here would be to allocate the LXC container manually, give it a static IP of my choice, then add knowledge of this to juju? https://askubuntu.com/questions/671326/manually-provision-an-existing-lxc-container-in-juju-local
<rick_h__> haasn: unfortunately things are setup expecting maas to be doing the heavy lifting there in my experience.
<haasn> fair enough
<haasn> This alone might be a reason to set up an extra subnet for the maas and use its DHCP, if it makes life easy when allocating lots of lxc containers dynamically
<haasn> but I guess I can just "colocate" my services that run on the same machine (via --to) and stick to the current, simpler design that doesn't require any new network hardware
<haasn> and only use maas for physical machines or VMs
<haasn> Am I correct in assuming that (currently) `juju expose` pretty much does absolutely nothing for maas environments and all services are always publicly reachable?
<haasn> I heard that some new rewrite of juju in Go is going to have a juju firewall that manages which services to expose or not to expose, but I take it this is in the distant future?
<rick_h__> haasn: yea, that's true because there's not a provider security/firewall api in question
<rick_h__> haasn: I think there was something looked into to control a firewall, but it's not on all hosts by default atm
<haasn> How do service relations work? If I write "juju deploy wordpress && juju add-relation wordpress mysql", does that mean wordpress "automagically" uses the mysql service as its backend? What happens if I just write "juju deploy wordpress" without having a mysql service anywhere else?
<rick_h__> haasn: so first, what will happen if you don't have a database. The wordpress service will be deployed and report that it's blocked, waiting for a database, before it's useful
<rick_h__> haasn: the two charms both declare they can communicate around a protocol by defining both ends of a relationship
<rick_h__> haasn: https://jujucharms.com/docs/1.25/charms-relations has a starter and there's other docs on writing them
<haasn> makes sense, thanks
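For reference, the deploy-plus-relate flow discussed above can also be expressed declaratively in a juju 1.x bundle; a sketch, assuming the stock wordpress and mysql charms (both sides of the relation use the endpoint name "db" there):

```yaml
services:
  wordpress:
    charm: cs:trusty/wordpress
    num_units: 1
  mysql:
    charm: cs:trusty/mysql
    num_units: 1
relations:
  - [wordpress:db, mysql:db]
```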
<haasn> is the machine ID treated like a nonce? i.e. if I add and destroy machines often, it will always only grow?
<rick_h__> yes
#juju 2016-02-06
<haasn> What's the default behavior of juju when it has multiple nodes to choose from? Pick the one with the "weakest" hardware that still meets the requirements?
<haasn> I just noticed that when installing a bundle of multiple charms it seemed to give them all 1G vm instances except for the ones that request at least 2G of mem, those got 2G
<haasn> (It had 1G, 2G and 4G instances to choose from)
<haasn> rick_h__: the documentation you linked for constraints includes juju set-constraints --service mysql mem=8G cpu-cores=4 as an example to set constraints "for all future mysql services"
<haasn> but running this on my environment gives me `ERROR service "mysql" not found`
<haasn> Also for some reason, just doing the whole magic "juju deploy X" sometimes works just like that, and sometimes the machine gets stuck while deploying: it just shuts off after the first step and never turns itself on again..
<haasn> (the machines are all nearly identical VMs with identical settings, they're all using qemu+ssh to control the power. So I have no idea why one would randomly work and the other would randomly fail. No idea if it's a juju or maas bug)
<apuimedo> jamespage: I have the tunnel zone registration fixed for maas
<lazypower|summit> jrwren - they come from the interfaces archive - http://interfaces.juju.solutions  - unless you have it cloned locally in $INTERFACE_PATH
<zdoge> Hello - I use Juju on a vSphere deployment, I wanted to ask if there is an easy way to select a specific vSwitch during the deployment of charms? (1 vSwitch = 1 different /16 subnet). Thanks!
#juju 2016-02-07
<nagyz> jamespage, how can I add more ceph-osd units after I already deployed it? add-unit doesn't take any constraint (and would need a machine to be specified)
<rick_h___> nagyz: you can use the --to flag with add-unit. Check out juju help add-unit
<jamespage> nagyz, add-unit --to would work
<jamespage> like rick_h__ said :-)
<nagyz> --to tells it which machine to deploy to but that's not exactly what I want, right?
<nagyz> anyways, if I just call add-unit it seems to honor the tags that were originally used with deploy
<nagyz> I actually wanted it to grab new machines, just from the ones available with the tags I specified
<nagyz> jamespage, there is also something broken with the ceph-osd charm - I've opened a bug for it.
<marcoceppi> nagyz: so constraints are applied to a service, so it'll remember those constraints even with add-unit
<marcoceppi> nagyz: if you wanted to do something /other/ than those constraints, you would need to `juju add-machine --constraints <..>` then `juju add-unit --to <new-machine-#>`
<marcoceppi-shout> marcoceppi hello
<marcoceppi> marcoceppi-shout: hello
<marcoceppi> marcoceppi-shout: hello ello
#juju 2017-01-30
<Budgie^Smore> So is there a charm and / or a doc on standing up a private docker registry for k8s?
<lazyPower> Budgie^Smore - there's an open PR that hasn't made the shift to the upstream repository that adds this functionality into k8s itself  - https://github.com/juju-solutions/kubernetes/pull/97
<lazyPower> once that lands it'll get released with our next update to the charms, we have some additional PRs that need to land to support that change. but it's on the horizon
<Budgie^Smore> so again I am getting ahead of myself :)
<Budgie^Smore> I am pondering running nexus 3 in a container in the meantime (possibly long term depending on the registry functionality)
<BlackDex> blahdeblah: It's about this bug https://bugs.launchpad.net/nrpe-charm/+bug/1633517
<mup> Bug #1633517: local checks arn't installed sinds nrpe-7 <NRPE Charm:New> <https://launchpad.net/bugs/1633517>
<blahdeblah> BlackDex: You've caught me a little late in my day, but if I get a chance I'll have a look at that if I have some spare time.
<kjackal> Good morning Juju world!
<BlackDex> blahdeblah: Thx, i don't mind, i just need an answer/help. i know the time-difference is there, so no prob, i'm glad someone wants to take a look
<marcoceppi> morning kjackal o/
<kjackal> hello marcoceppi
<blahdeblah> BlackDex: Definitely keen to find out what's going on; what times (UTC) are you likely to be around most?
<BlackDex> blahdeblah: i'm in UTC+1 (netherlands) so that would be UTC 8:00 till around 16:00
<blahdeblah> BlackDex: ack - will try to catch you in your mornings
<BlackDex> oke :) cool thx!
<Zic> hi here, I'm wondering whether a complete teardown (and tear-up) via Juju could permit a resolution of this issue: https://github.com/kubernetes/kubernetes/issues/40648
<Zic> because it does not seem that many people have encountered this one :/
<Zic> (I'm also trying to see in the Kubernetes' Slack if somebody already encountered this issue)
<marcoceppi> Zic: it might be? Are you on 1.5.2?
<Zic> marcoceppi: yep
<marcoceppi> Zic: I don't feel comfortable saying scrap and redeploy, esp if there's information we can capture from your deployment to improve CDK, but I also don't want you sitting with a wedged cluster
<marcoceppi> Zic: lazyPower mbruzek & co should be online in the next few hours
<Zic> yeah, they helped a lot with the first party of this problem friday :)
<Zic> s/party/part/ :]
<marcoceppi> I'm cool with calling it a party instead of a problem ;)
<Zic> the first part was that all my pods crashed with this kind of error, and even some kubectl commands (the ones which actually "do/write" something, like create/delete, whereas get/describe works) returned this kind of error
<Zic> upgrading to 1.5.2 brought all my Pods back to Running
<Zic> but this weekend, when I tried to reboot some kubernetes-worker to test the resilience and the eviction/respawn of pods, I fell into a sort of the same problem again :/
<Zic> oh, I found something strange, cc @ lazyPower
<Zic> http://paste.ubuntu.com/23892884/
<Zic> etcd again /o\
<Zic> I saw this via juju status, flannel/2 was marked as "waiting" indefinitely
<marcoceppi> Zic: you have 1,3, or 5 etcd machines?
<Zic> 5 etcd
<Zic> on 5 different VMs
<marcoceppi> doh, just saw the flanneld line
<Zic> I didn't have Flannel in this state when I opened the GitHub issue
<marcoceppi> interesting
<Zic> the last guilty for my first problem was also etcd
<marcoceppi> it seems it just started happening based on the logs
<Zic> do you know if I can do a "fresh start" of an etcd database for canonical-kubernetes without redeploying it from scratch? I don't have any important data in this cluster for now
<Zic> (= I have all my custom YAML file to redeploy all easily)
<Zic> files*
<marcoceppi> Zic: you should be able to just remove the etcd application then redeploy etcd and re-create the relations
<marcoceppi> you might get some spurious errors during removal, and I'm not sure if it's a tested path or not
<marcoceppi> theoretically, you should be able to, but distributed systems are always a bit interesting in practice
<Zic> oh you mean via Juju? I thought to wipe the data directly in etcd, but even if I don't know etcd well, I suppose there are some "default keys/values" needed and provisioned via Juju deployment at bootstrap :/
<marcoceppi> Zic: not sure via etcd, from a Juju perspective the "keys" for TLS are actually a charm, the easyrsa charm, so since there's a CA running still it'll just get new certs, distribute those via relations and k8s will be reconfigured to point at that etcd
<marcoceppi> Zic: as for the etcd portion, there probably is a way to wipe, I'm just not sure of one
<Zic> marcoceppi: do you advise me to wait for lazyPower and ryebot to come up before smashing etcd in the head? (it's not the first time etcd has annoyed me, even in other technologies than K8s/Vitess :))
<marcoceppi> Zic: It's probably a good idea to wait for them, but smashing etcd over the head might also be very therapeutic. I'll make sure we have some people in EU/APAC timezones come up to speed with kubernetes knowledge so there's not so much of a wait period
<Zic> oh I will never complain about timezone as it's a community support channel :) but great to here it
<Zic> hear*
<Zic> new info: old NodePort services are still listening, but with a new NodePort service just deployed, no nodes are listening on this port :/
<Zic> I think Flannel is the guilty one, but only because it cannot contact etcd
<marcoceppi> stokachu: ping when you're around
<marcoceppi> or mmcc but I doubt you'll be around before stokachu
<zeestrat> Hey BlackDex, this might be a long shot, but the bug you posted on NRPE isn't related to https://bugs.launchpad.net/charms/+source/nagios/+bug/1605733 by any chance?
<mup> Bug #1605733: Nagios charm does not add default host checks to nagios <canonical-bootstack> <family> <nagios> <nrpe> <unknown> <nagios (Juju Charms Collection):New> <https://launchpad.net/bugs/1605733>
<stokachu> marcoceppi, ping
<marcoceppi> stokachu: hey man, what's conjurebr0 for?
<marcoceppi> stokachu: I'm doing some super weird things in a spell, and was curious
<stokachu> marcoceppi, it's mainly for openstack on novalxd to have that second nic for its neutron network
<stokachu> but it's always there so you could rely on it if need be
<Zic> mbruzek: hi, are you around? just saw you joined, sorry if I disturb you
<mbruzek> Zic I am here. What can I help with?
<Zic> mbruzek: remember the last time with my Ingress controller in CLBO? I thought all was fixed after upgrading to 1.5.2, but when I rebooted some nodes, the problem came back... I continued to look at the problem today and saw Flannel is completely messed up: http://paste.ubuntu.com/23892884/
<Zic> actually, all new NodePort are not working :s
<mbruzek> hrmm.
<mbruzek> There must be a problem with the reboot sequence. Do you think you could reproduce this?
<marcoceppi> stokachu: is it routable, and is it connected to the controller?
<stokachu> marcoceppi, yea it's routable, but not connected to the controller
<marcoceppi> stokachu: cool
<marcoceppi> stokachu: second question, can I reference a local bundle.yaml file in the spell metadata?
<Zic> mbruzek: I tried to restart the flannel service but with the same result, I didn't try to reboot another node to see if I can reproduce
<stokachu> marcoceppi, you would just place a bundle.yaml in the same directory as your metadata.yaml and make sure bundle-location isn't defined in metadata.yaml
<marcoceppi> stokachu: boss, thanks
<stokachu> np
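stokachu's layout can be pictured as a minimal spell tree; everything here other than metadata.yaml and bundle.yaml is illustrative:

```
my-spell/                 # hypothetical spell directory
├── metadata.yaml         # must NOT define bundle-location
├── bundle.yaml           # local bundle, deployed instead of a charm-store one
└── steps/                # optional post-deploy steps, if the spell has any
```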
<mbruzek> Zic: I need to know how you are rebooting these systems. Are you doing them in a specific order?
<marcoceppi> stokachu: I also wrote this https://gist.github.com/marcoceppi/e74c10178d1b730a36debc1f1622b2ce
<Zic> mbruzek: this morning, I just rebooted (via the `reboot` command) one kubernetes-worker
<marcoceppi> I'm using it in a modified step to merge kubeconfig files, this way the user only needs to set --context
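The gist itself isn't reproduced here, but the merge idea can be sketched in a few lines of Python; this is a hypothetical simplification (no de-duping or collision detection) operating on kubeconfig-shaped dicts:

```python
def merge_kubeconfigs(base, extra):
    """Merge two kubeconfig-style dicts by concatenating their
    clusters, contexts, and users lists. Illustrative only: no
    de-duping or collision detection is attempted."""
    merged = dict(base)
    for key in ("clusters", "contexts", "users"):
        merged[key] = list(base.get(key, [])) + list(extra.get(key, []))
    return merged
```

Each entry keeps its own name, so contexts from different models can live side by side and be selected per command with `kubectl --context <name>`.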
<Zic> no other machines
<stokachu> marcoceppi, nice!
<marcoceppi> stokachu: updated with the step-01 file, nothing major
<marcoceppi> stokachu: last question, for headless, any thoughts on allowing a final positional argument for model name?
<marcoceppi> I love rando names as much as the next person, but I have some explicit model names I want to use
<stokachu> marcoceppi, one thing that i need to address for kubernetes is https://github.com/conjure-up/conjure-up/issues/568#issuecomment-272379010
<marcoceppi> stokachu: yeah, that's what my gist does
<marcoceppi> stokachu: it names the context, user, and cluster the same as the model name from juju
<stokachu> marcoceppi, very nice, how do you access that with kubectl?
<marcoceppi> so they can live side by side with others. it doesn't do de-duping or collision detection yet, but I'll test it with my local spell first
<marcoceppi> stokachu: `kubectl --context <model-name>`
<marcoceppi> stokachu: and `kubectl set-context <model-name>` <- this is like juju switch
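For reference, the two forms look like this; the context name is a placeholder, both commands assume a kubeconfig that already contains that context, and note that the persistent switch is spelled `kubectl config use-context` in kubectl releases:

```shell
# Use a context for one command only:
kubectl --context my-model get pods

# Switch the default context (the `juju switch` analogue):
kubectl config use-context my-model
```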
<stokachu> marcoceppi, very nice, once you're ready ill add those to the spells
<stokachu> marcoceppi, we have positional arguments for cloud and controller, so adding a third for model makes sense
<marcoceppi> stokachu: cool, I'll file a bug, not high priority but wanted to run by you first in person before throw'in another on the pile
<stokachu> marcoceppi, thanks, that's an easy one so it'll get addressed this week
<stokachu> marcoceppi, my other big todo is to make spell authoring cleaner with maybe a clean sdk or something
<stokachu> haven't quite figured out the best approach there for developer happiness
<marcoceppi> stokachu: yeah, I was taken aback by all the bash and python mixed
<ryebot> mbruzek Zic: I'm bringing up a cluster to attempt to repro
<stokachu> marcoceppi, yea i'd like to use something like charmhelpers for this
<marcoceppi> stokachu: you might be able to borrow a lot from the reactive style, where you use decorators in bash/python to trigger/register events
<mbruzek> Zic: And how do you see the problem? Are you just watching the output of kubectl get pods ?
<stokachu> marcoceppi, ah that's a good idea, would clean up a lot of the code
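The decorator-registration style marcoceppi alludes to can be sketched like this; all names here are illustrative, not the actual charms.reactive API:

```python
# Minimal event registry in the spirit of reactive-style decorators.
# `when` and `dispatch` are made-up names, not a real framework API.
_handlers = []

def when(event):
    """Register the decorated function to run when `event` fires."""
    def register(fn):
        _handlers.append((event, fn))
        return fn
    return register

def dispatch(event, *args):
    """Run every handler registered for `event`, in registration order."""
    return [fn(*args) for registered, fn in _handlers if registered == event]

@when("config.changed")
def render_config(value):
    return "rendered with %s" % value
```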
<marcoceppi> stokachu: and with the bash bindings, best of both worlds
<marcoceppi> stokachu: I'll file a bug for you there with some initial thoughts
<stokachu> marcoceppi, cool man appreciate it, i want to get that done sooner than later as well
<Zic> mbruzek: I'm running a permanent watch "kubectl get pods -o wide --all-namespaces" when I reboot a node, and watch at the pods state, during that, I also do some curl and telnet to various Ingress and NodePort of the cluster
<Zic> I think that's a "relic" of my first problem; as the steps-to-reproduce are hard to describe, I'm wondering if there is a way to reset the etcd cluster to default values (= wipe all data of the K8s cluster) without reinstalling the Juju
<Zic> the Juju full-cluster*
<Zic> it may be simpler to set up a step-to-reproduce path, or to confirm that it's tied to the problem from last time and my actual data is corrupted :s
<marcoceppi> stokachu: https://github.com/conjure-up/conjure-up/issues/635
<stokachu> marcoceppi, perfect thanks
<marcoceppi> stokachu: I'm onsite atm, but I'll file the developer one later tonight if you don't get to it before me
<stokachu> marcoceppi, cool man, yea ill file one
<stokachu> marcoceppi, fyi https://github.com/conjure-up/conjure-up/issues/636
<Zic> ryebot mbruzek: in the same strange behaviour, if I totally power off a node that was hosting some pods, the node is shown as NotReady in kubectl get nodes (this point is ok), but the pods keep saying "Running" on the powered-off node
<Zic> I'm sure I didn't have this behaviour on the fresh bootstrapped cluster
<marcoceppi> stokachu: thanks, I'll dump my ideas there
<stokachu> marcoceppi, cool man
<mbruzek> Zic: What problem(s) are you trying to solve by rebooting? What else are you doing on the system to necessitate the reboot?
<Zic> I'm trying to test the HA and the resilience of the cluster (= what happens, and for how long) before going to prod
<Zic> with a fresh bootstrapped cluster, all pods hosted on a node passed to "Completed", eventually disappeared, and re-popped as "Running" on another node
<Zic> now, they just stay in the "Unknown" state
<Zic> and as there are some variables I can't control, like Friday's disaster, I cannot describe clear steps-to-reproduce without a full reset I think :/
<mbruzek> Zic: We are looking into the problem here, trying to reproduce on our side
<Zic> thanks
<Zic>   11m  11m  1  {controllermanager}  Normal  NodeControllerEviction  Marking for deletion Pod kube-dns-3216771805-w2853 from Node mth-k8svitess-02
<Zic> for example, this action lasts forever; the pod stayed in Unknown instead of switching to Completed and respawning somewhere else
<Zic> (in fact it popped up somewhere and was in state Running, but the old one stayed in Unknown forever)
<mbruzek> Zic: And you have deployed 1.5.2 kubernetes right?
<Zic> yep, was friday :)
<mbruzek> I remember
<ryebot> Zic: Hmm, not able to repro.
<Zic> maybe I need to do a recap because I spoke so much, sorry :D 1) The first problem was some kube-system components and the Ingress controller in CLBO because the etcd units were rebooted too quickly (operations were in progress, I think) because of a large namespace deletion 2) Upgrading to 1.5.2 immediately fixed the problem (I thought) 3) I rebooted just one node this weekend (planned to reboot all first, but as the first one
<Zic> triggered problems, I stopped) and finished here
<ryebot> Zic: We might need some detailed reproduction steps
<ryebot> Zic: ack, thanks
<ryebot> Let me try rebooting etcd
<Zic> I'm sure that if I do the same on a fresh canonical-kubernetes, I won't have any of these issues; something must not have totally recovered from the previous problem at the etcd level
<mbruzek> Zic: Some of your systems are physical, yes?
<mbruzek> Zic: We rebooted a worker here and did not have a problem coming back up
<Zic> yep, 5 of 8 kubernetes-worker
<Zic> all others components are VMs
<Zic> mbruzek: yeah, just after the first installation of the bundle charms, all these operations were OK
<Zic> it's since my last Friday's incident; something must be only partially working
<ryebot> Zic mbruzek: rebooted all etcd nodes, no problems coming back up
<ryebot> Pods and nodes all intact
<Zic> can I wipe the etcd cluster back to default data without tearing down the whole canonical-kubernetes cluster?
<Zic> (the infra; I don't mind losing the settings of the K8s cluster, I can redeploy my pods & services easily)
<mbruzek> Zic: We have not tested wiping out etcd, it holds some of the Software Defined Network settings.
<mbruzek> Zic: We are unable to reproduce the failure you are seeing. It may be because of the manual operations you ran post deployment. Would it be possible to re-deploy the canonical-kubernetes cluster entirely and start there?
<Zic> mbruzek: yeah, I think it's the only path now
<Zic> mbruzek: do I need to reinstall everything or can I do a clean teardown with Juju and restart from the beginning?
<mbruzek> Zic: As we spoke about on Friday, let's take a snapshot of your environment now.
<mbruzek> Basically you need to use the Juju GUI to export the model of your environment now
<ryebot> Zic: To open the GUI:
<ryebot> Zic: If you haven't changed your admin password, run `juju show-controller --show-password` to get the randomly generated password
<ryebot> Zic: Next, run `juju gui`
<marcoceppi> ryebot Zic or just run `juju gui --show-credentials` ;)
<ryebot> marcoceppi: dangit, I always forget that
<ryebot> Zic: That'll start up the gui and give you a url to hit
<ryebot> Zic: Login with "admin" and your password, then look for the export button, which is at the top and looks like a box with an up-arrow
<Zic> ryebot: yeah, it's the step I followed to bootstrap the cluster successfully
<ryebot> Zic: Click that, and it'll download a copy of the model. We'd like to see it.
<Zic> it's the "teardown" part where I don't know what the best practice is :)
<Zic> oh ok, I will do that now
<mbruzek> Zic: The up-arrow button will download the model in YAML representation. You can save it and it will help you deploy the same environment again in a repeatable fashion
<Zic> I wrote a more detailed step-to-not-reproduce-but-post-mortem: http://paste.ubuntu.com/23894187/
<Zic> mbruzek: do I need to reinstall the VMs and machine which host the cluster?
<jcastro> Zic: hey, just as an open invite, we'll be in Belgium next week if you feel like hopping on a train to talk face-to-face: http://summit.juju.solutions/
<mbruzek> Zic: The summit is free as in beer
<mbruzek> Zic: Because you used a mixture of Amazon and the Manual Provider it may not be as easy as a juju deploy bundle.yaml, but after you manually provision those physical systems you can deploy the bundle.
<mbruzek> Zic: Pastebin the model when you get that done
<Zic> mbruzek: the Amazon machines were enlisted with the Manual provider too, I don't use the AWS credentials
<Zic> (our AWS instances are popped by Terraform, for the anecdote)
<Zic> so all I need to do is 1) reinstall all the OSes 2) re-enlist them with the Manual provider of the Juju controller 3) redeploy the YAML I'm exporting, or is the 1st step useless?
<Zic> jcastro: will be happy to come, Belgium is not far away, I will try to discuss at our meeting if we can go with my company :)
<jcastro> bring as many people as you want too, it's a free event.
<Zic> http://paste.ubuntu.com/23894269/
<mbruzek> Zic: Why are you installing the OS?
<Zic> mbruzek: because we don't have MaaS and our homemade installer has no connector to Juju (but lazyPower let me know that I can write one :))
<Zic> am I missing something?
<mbruzek> Zic: No, I just didn't understand your environment. I was about to tell you about MAAS but you already know.
<Zic> VMs and physical servers at our datacenter are auto-installed by a homemade installer like MaaS (which also populates our internal Information System, registry, and some warranty support)
<Zic> for AWS, we just use AMI
<Zic> so I told lazyPower that maybe, in the future, if we have more Juju infra I will install a MaaS
<Zic> or maybe start writing a connector for Juju if it's not too hard for my knowledge
<mbruzek> Zic: With that new information, it seems your steps are right. I was hoping to avoid having to reinstall the OS
<Zic> ok, I was asking about the reinstallation in case Juju provides a clean way to tear down the cluster
<Zic> if not, no big deal, I will just need to redo the manual-provider part; the reinstallation is fast and automated
<lazyPower> Zic - as they were manually enlisted, there's no clean way to tear it down, once you juju remove-application the machines will be left behind and still be dirty.
<mbruzek> Zic: so you can issue: juju destroy-environment <name>,
<Zic> oh hi lazyPower :)
<ryebot> Zic: From the sound of it, you don't need to reinstall juju
<ryebot> just remove your manual machines, reprovision, and add them back
<ryebot> *if you want to, though, go ahead :)
<Zic> :)
<Zic> I'm sure this new cluster will not have all these problems, as it was fine at the beginning by the way
<mbruzek> Zic: For reference: https://jujucharms.com/docs/stable/clouds-manual  If you add the machines in manually you can use the bundle.yaml file you just downloaded to redeploy on the right systems using the to: (machine number)
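The `to:` placement mbruzek describes looks roughly like this in a bundle; charm revisions and machine numbers are placeholders, and the real exported bundle.yaml will carry Zic's own values (newer Juju releases also accept `applications:` as the top-level key):

```yaml
series: xenial
services:
  kubernetes-worker:
    charm: cs:~containers/kubernetes-worker
    num_units: 2
    to: ["4", "5"]   # machine numbers as shown by `juju status`
machines:
  "4": {}
  "5": {}
```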
<Zic> it must be some sneaky thing that came up with Friday's incident, even if I don't know what it is
<Zic> mbruzek: yup, I will re-add the machines in the same order (and will control through juju status btw)
<Zic> it's what I did the first time to match charms with the hostnames of machines (as we use predictable, role-named hostnames)
<mbruzek> Zic: Looking at the last pastebin I see the variety of machines you are using, some of your workers have 20 cores and some have 2. You should be able to identify the systems by their constraints.
<Zic> mbruzek: do you think I can try restoring the etcd base from a backup taken before Friday's incident? or is that worse, and I just shouldn't spend the time?
<Zic> I don't know how Kubernetes manages a restore of etcd when there is a delta between what is currently running in terms of pods, services... and what is restored into etcd
<Zic> (I had 3 Vitess clusters deployed when I did the etcd backup; I've wiped all of them since then)
<lazyPower> Zic - that is a problem. your snapshot will not contain the TTLs on the keys, so you'll restore to whatever the state was during that snapshot
<lazyPower> this may have implications on running workloads
<mbruzek> Zic: Here is what I would recommend. Redeploy this cluster, and once you get everything working and in good state take a snapshot of etcd data (before you do any non-juju operations)
<Zic> because I have two options: 1) etcd backups 2) all management parts of the cluster (easyrsa, kube-api-loadbalancer, etcd and kubernetes-master, so everything except kubernetes-worker) are snapshotted daily
<Zic> so if I set all the management parts back to the past, I don't know how the kubernetes-worker part will act
<Zic> I know that's complicating the problem instead of reinstalling everything; it's just to know what I could possibly do if this were in production
<Zic> (did nothing actually for now :p)
<mbruzek> Zic: Technically I think you could do both: back up etcd, and snapshot the Kubernetes control plane
<mbruzek> Zic: the etcd charm has snapshot and restore actions provided in Juju; you can run that at the same time you snapshot the control plane
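The action-based flow mbruzek mentions would look roughly like this; the action name and output shape below are assumptions, and `juju actions etcd` lists the real ones for the deployed charm revision:

```
juju actions etcd                     # list the actions the charm provides
juju run-action etcd/0 snapshot       # queue the snapshot action
juju show-action-output <action-id>   # retrieve the result, incl. the archive path
juju scp etcd/0:<path-to-archive> .   # copy the snapshot off the unit
```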
<Zic> oh I didn't know; I did this backup manually via crontab and the etcdctl backup command
<Zic> so, I will try to set all the K8s control plane back to Thursday, and see if from there I can directly upgrade to Kubernetes 1.5.2
<Zic> mbruzek: if one of the components managed by the charms (like etcd) appears in APT's upgrades, should I apply or hold this package?
<Zic> it's one of the first steps that led to my disaster last week
<lazyPower> Zic - thats a great question, and I should probably be pinning etcd if delivered via charm and release charm upgrades when the package is upgraded.
<Zic> don't know if it is for real, but was in the steps
<Zic> ok, I will pin etcd
<lazyPower> to unpin and rev the etcd package
<Zic> I think all these problems came from the upgrade of etcd via APT *PLUS* the fact that I ran large delete operations on large namespaces just before, and maybe I didn't wait long enough
<Zic> (concerning Friday) and concerning today, maybe some parts have not been working perfectly since then
 * mbruzek suspects that as well
<Zic> and it's not reproducible unless you can do the exact same delete operation and upgrade etcd at the wrong time like me :D
<Zic> as I said, before that, all my resilience and HA tests were perfect :)
<Zic> I thought I would go to prod quickly ^^
<Zic> it's not the first time etcd f*cked me up, in technologies other than K8S (or even Vitess, as lazyPower knows); I know it's not your fault and I'm very happy with all the help you were able to provide me these last days ;)
<Zic> lazyPower mbruzek ryebot: I successfully returned to the previous state before the incident via my backed-up snapshots of the whole K8s control plane, so I'm going to immediately upgrade to 1.5.2 and will redo my own steps-to-reproduce
<Zic> I expect to... not reproduce my problem :)
<mbruzek> why upgrade? If you deploy new you should get 1.5.2 by default
<mbruzek> Zic: ^
<Zic>  Zic | I know that's complicating the problem istead of reinstall everything, it's just to know what can I possibly do if it was in production
<Zic> ^ just to test that
<Zic> I restored the VMs (which host master, etcd, apilb and easyrsa) from an ESX snapshot of Wednesday
<Zic> (my cluster worked perfectly at that date)
<Zic> I have just the upgrade to 1.5.2 to redo
<Zic> and I'm sure that my steps-to-reproduce the problem will not work, as it seems to be tied to the etcd disaster of Friday
<Zic> I can confirm, I can't reproduce my own previous problem \o/
<Zic> so it seems that something I did Friday corrupted something (etcd, I suppose), and that was the guilty part
<Zic> I just restored all the management parts to Wednesday, re-upgraded to 1.5.2, restored some kubernetes-worker and etcd... all seems fine
<Zic> the only difference is that I immediately upgraded to 1.5.2 before deleting my large namespaces
<Zic> and that I did not upgrade etcd through APT this time
<Zic> cc lazyPower ^
<lazyPower> Zic - that's good to hear. I'm going to circle back and file a bug, if you don't beat me to it, against layer-etcd to pin the package or make it configurable.
<Zic> :) on my side, I will do a simple apt-mark hold etcd for this time
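The hold Zic mentions is a one-liner per unit; a sketch, assuming a unit where the etcd package is installed from the archive:

```shell
sudo apt-mark hold etcd    # keep apt from upgrading etcd out of band
apt-mark showhold          # verify the hold is in place
```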
<lazyPower> i haven't had the pleasure of testing the scenario where etcd is upgraded out of band by an apt-get operation, so it may have been attributable to that, or it might have been attributable to broken key/val data in etcd due to the delete.
<Zic> a mix of the two I think, upgrade via APT during broken key/val operation
<Zic> it's the only part I didn't test in my step-to-reproduce
<lazyPower> yeah, it's crappy that we weren't able to recover from that though
<Zic> (I immediately upgraded, and then deleted all my namespaces)
<Zic> (thinking of the buffer problem of kubeapilb)
<lazyPower> Zic - i suppose moving forward, the suggestion is to snapshot your data in etcd, then run the upgrade sequence. it's going to stomp all over your resource versions doing the restore, but it's better to attain that prior state than to be completely broken.
<Zic> yep
<Zic> I will also try to look carefully at apt's upgrade proposals
<Zic> to not upgrade anything that's managed by Juju charms
<Zic> lazyPower: the juju etcd charm does not include auto-backup, right? do you think it's a good idea, as etcd-operator does it?
<Zic> currently I run the backup through a crontab on each etcd units, mbruzek told me that I can do the same with a juju action, I will go with that I think
<lazyPower> Zic - i'm open to a contribution for auto backups, but as it stands today it's an operator action
<lazyPower> you can run that backup action in like a jenkins job and have it archive, and then you have an audit trail
<lazyPower> i get leery of automatic things that have no visibility (like cron)
<Zic> personally I prefer to configure this type of backup on my own
<lazyPower> the last thing i want is to assume its working, by wrapping it in a CI subsystem you have trace logs and know when it fails
<Zic> but as Juju is here to help, it sounds like a good feature :)
<Zic> haha, yeah, I can understand that part :)
<lazyPower> and the whole juju action part ensures its repeatable :)
<Zic> even if etcd-operator does the job, I will actively monitor what it does
<lazyPower> plus those packages are what i've tested for restore. it's effectively the same thing, but i'd hate to think adding an extra dir to the tree or something would cause the restore action to tank.
<lazyPower> and then its added gas to the fire
<lazyPower> metaphorically speaking anyway
<Zic> for now I do the both : daily snapshots of the VMs which host etcd units + etcdctl backup command
<lazyPower> thats a good strategy
<lazyPower> +1
<Zic> lazyPower: do you have any docs on how Juju and MaaS are connected, code-wise?
<rick_h> Zic: what do you mean?
<Zic> I will look at whether it's worthwhile for us to develop a connector for our own installation/provisioning infra, or to deploy a simple MaaS for the Juju architecture
<Zic> rick_h: oh hello, about this thing ^
<Zic> we have a kind of MaaS which is connected to all our services in my company
<Zic> it's a homemade system and I don't know if I can write a simple new "provider" for Juju, or if I should just go with MaaS
<Zic> (will be a little redundant)
<rick_h> Zic: check out https://github.com/juju/gomaasapi and https://github.com/juju/juju/tree/staging/provider/maas
<Zic> rick_h: thanks
<Zic> mbruzek: hmm, the juju debug-log is kinda flooding since the new upgrade to 1.5.2 : http://paste.ubuntu.com/23894873/
<Zic> (I use the juju upgrade-charms command)
<Zic> lazyPower: I'm reposting as you were offline : the juju debug-log looks really strange since my new upgrade to 1.5.2 : http://paste.ubuntu.com/23894873/
<lazyPower> Zic - most of that is normal. the leadership failure - if it continues to spam we'll want to get a bug filed against that
<Zic> it floods in loop :)
<lazyPower> that's the unit agent complaining about a process it needs to do leadership stuff for coordination. not related to the charms however
<lazyPower> Zic - i'm headed out for lunch will be back in a bit
<Zic> hmm, I didn't do a kubectl version after juju upgrade-charm, I just took a look at juju status, but I'm still on 1.5.1 in fact :o
<Zic> (I confirm I stayed on 1.5.1 after the juju upgrade-charm; I was just waiting for juju status to all return to green and didn't look at the application version -_-)
<stormmore> how juju world
<stormmore> howdy*
<lazyPower> Zic - so you're saying juju upgrade-charm on the components either didn't run, or the resource was not upgraded?
<lazyPower> sorry for latency, i'm at terrible coffeeshop free wifi
<Zic> lazyPower: the upgrade-charm command just changed to the latest version of the charm, but the application was not upgraded
<lazyPower> Zic - that seems strangely reminiscent of another user reporting at deploy time they didn't get the upgraded resource
<lazyPower> this is recoverable
<lazyPower> on the store display, you can fetch each resource and manually attach them to upgrade the components.
<lazyPower> we actually just landed a doc update about this, 1 moment while i fetch the link
<lazyPower> Zic - https://github.com/juju-solutions/bundle-canonical-kubernetes/pull/197
<Zic> the upgrade performed well the first time with the same cluster; I don't know what happened :(
<Zic> does the order in which charms are upgraded via upgrade-charm matter?
<Zic> oh, I know what happened
<Zic> in 1.5.1 Flannel does not start properly
<Zic> I forgot to restart them :}
<lazyPower> in juju status, if you ran the upgrade-charm step, you should still see 1.5.2 listed as your k8s component versions
<lazyPower> assuming it went without error. if there's a unit(s) trapped in error state that are related, its possible that the upgrade hasn't completed
<Zic> does the upgrade-charm start if Flannel is in error?
<Zic> ah
<lazyPower> if the charm is in error, it will halt the operations on related units
<lazyPower> until the error is resolved
<lazyPower> ok lunch is over for me, heading back to the office and will resume then Zic.
<lazyPower> o/
<mbruzek> Zic: I am back.
<mbruzek> Zic: What is the current issue?
<Zic> mbruzek recap: I restored my old cluster to Wednesday; I ran a "juju status", all was green; I ran juju upgrade-charm on each charm; I did another juju status at the end and all was green, and the "Rev" column contained the latest version of the charm, but in fact the software version was still 1.5.1 for kubernetes-master/worker for example. I remembered that Flannel doesn't start well in the boot sequence on
<Zic> 1.5.1 lately, started it on every node and the upgrade was unblocked. The only weird thing is that Flannel was shown as "active/green" in juju status so...
<Zic> so all is fine actually; it was my mistake with Flannel not autostarting well on old 1.5.1
<Zic> (and juju status showing me active/green the first time)
<mbruzek> Zic: Yes we fixed the flannel restart issue in 1.5.2 so I am confused why flannel didn't restart.
<Zic> mbruzek: the restored cluster was on 1.5.1
<mbruzek> ah
<mbruzek> OK
<Zic> :)
<Zic> apparently, Flannel not being started was blocking the upgrade
<mbruzek> But everything has started now?
<Zic> yep
<Zic> it's my fault: when the cluster was restored to Wednesday/1.5.1, I just did *one* juju status, all was green
<Zic> normally I run a watch -c "juju status --color"
<Zic> I didn't see, between the first "juju status" and the upgrade, that Flannel had passed to "error/red"
<Zic> and that's apparently what blocked the upgrade, as the upgrade began instantly after Flannel was manually started
<Zic> lazyPower: TL;DR : it was Flannel (of the 1.5.1 version) which blocked my upgrade :)
<lazyPower> ah good to know
<Zic> it's OK now
<lazyPower> i wish we could retroactively fix that
<lazyPower> #fwp
<mbruzek> right!
<Zic> it was my fault also, as I just ran juju status one time, it showed all green, then upgrade-charm, saw it did not nothing at the juju debug-log, re-ran a juju status, saw that Flannel was in error... and remembered that on Wednesday (date of the snapshot) I was still on 1.5.1 with the flannel issue :p
<Zic> normally I monitor the upgrade-charm process through a watch -c "juju status --color" :p
<Zic> s/it did not nothing/it did nothing/
<Zic> double-negation is dangerous.
<Zic> IN CONCLUSION (sorry for the caps), I have a good-running 1.5.2 cluster, no CLBO, PodsEviction works well if node goes down...
<lazyPower> \o/
<lazyPower> yay
<Zic> ... well, the only last point is, can I come to the Juju Summit? :D
<Zic> I will discuss this with my company :)
<lazyPower> its a freebie event, and you're invited and can bring more
<lazyPower> so, load up the posse and meet us in ghent :)
<mbruzek> Zic: Yes you are most welcome to join us
<Zic> you will hear my perfect^WFrench accent \o/
<Zic> hmm, just a real last point: http://paste.ubuntu.com/23895203/
<Zic> all is up-to-date, isn't it? I have a doubt about k8s-master, which says 1.5.1
<Zic> (because: ERROR already running latest charm "cs:~containers/kubernetes-master-11" if I try the juju upgrade-charm kubernetes-master again)
<Zic> kubectl version return 1.5.2
<Zic> it seems to be a display bug. SSHing directly into the master, all components are on 1.5.2
<Teranet> Question: how can I FORCE destroy something on juju? I have 6 containers which will not go away
<mbruzek> Teranet: You want to destroy everything juju?
<Teranet> yes
<Teranet> so I can redeploy from scratch
<mbruzek> Teranet: OK here is the command, but you should be careful with this.
<mbruzek> juju destroy-controller <name> --destroy-all-models
<mskalka> teranet: juju kill-controller <controller-name> will tear it all down, including the controller node
<Teranet> ok let me see if that works
<Teranet> thx
<jcastro> Teranet: https://kubernetes.io/docs/getting-started-guides/ubuntu/decommissioning/
<jcastro> covers everything
<jcastro> oh sorry, thought you were using kubes
<Teranet> nope but it's ok
<jcastro> the Cleaning Up the Controller part at the bottom should still apply
<mskalka> I would try to remove the model first though with juju destroy-model <model-name>
<mbruzek> https://jujucharms.com/docs/2.0/controllers
<Teranet> I use my own private cloud and had a broken relation which broke my complete nova environment
<Teranet> juju destroy-model I had run, and it was stuck
<Teranet> I had wrongly applied an HA relation, which resulted in destroying all my compute nodes :-(
<Teranet> lucky I hadn't deployed VMs in OpenStack yet
<Zic> lazyPowe_ mbruzek: hmm, the two additional (scaled from 3 to 5) etcd members seem unhealthy in etcdctl cluster-health
<Zic> http://paste.ubuntu.com/23895436/
<Zic> (after an upgrade of the charm)
<Zic> I just restarted the etcd service via systemctl and all is fine
<Zic> (just to let you know if it's a known issue)
<Zic> all nodes are healthy after that
<lazyPowe_> Zic - seems like it might have raced; i haven't seen any test failures doing scale testing
<lazyPowe_> and there's logic to help prevent that in the charms
<Zic> hmm I spoke too quickly; it seems the restart does not suffice, it is unhealthy again, some etcd logs: http://paste.ubuntu.com/23895453/
<Zic> I have this problem on just my 04 and 05 etcd nodes
<Zic> hmm, seems just a bit of flapping : http://paste.ubuntu.com/23895464/
<Zic> they are all healthy again
<kwmonroe> cory_fu: petevg:  if matrix (or any other project) depends on juju-plugins, the setup.py PR (https://github.com/juju/plugins/pull/75) would make the crashdump PR (https://github.com/juju-solutions/layer-cwr/pull/46) unnecessary, right?
<petevg> kwmonroe: matrix doesn't depend on it.
<petevg> kwmonroe: cory_fu pushed back on that, and I think that he's right. crashdump needs python2's yaml, and matrix just handles python3 stuff.
<kwmonroe> ok, well let's forget matrix for now petevg.. should juju-plugins be some kind of packaged citizens?
<petevg> kwmonroe: yes. I think that we should merge the PR. I'm a little biased, though, on account of it being my PR :-)
<cory_fu> kwmonroe: Yeah, if we can update crashdump to work in py3 (i.e., if that one bug is fixed upstream), then we could perhaps make plugins a dep for matrix.  But I also kind of like having it as optional functionality that works if you have the lib installed and is otherwise a no-op
<cory_fu> An optional dep, if you will
<petevg> Somebody does need to go and clean up the merge conflicts, though.
<cory_fu> kwmonroe, petevg: +1 to packaging juju-plugins for easier install.  Could also be a snap
<petevg> cory_fu: yeah. No matter what, matrix shouldn't fail if crashdump doesn't exist.
<kwmonroe> petevg: if only there were a recently gung-ho ~charmer that could propose a clean PR...
<petevg> kwmonroe: yeah. It's on my list o' things to do today, once I finish running this double set of tests, where I'm confirming that I'm telling the truth about matrix running with and without the crashdump libs :-)
<kwmonroe> my beef is really that the juju-plugins readme says "clone this repo", and that's not enough
<kwmonroe> .. for runtime
<kwmonroe> .. sometimes
<petevg> Yeah. Adding the repo to your PATH is adequate, but not pretty. :-)
<kwmonroe> hey!  ^^ that's a java slogan right there, right mbruzek?  for runtime, sometimes?
<petevg> Heh.
<mbruzek> hi
<kwmonroe> except it's not petevg, doesn't crashdump need pyyaml at runtime?  nothing about cloning that repo and adding to the path helps you there.
<petevg> kwmonroe: true. nm
<petevg> kwmonroe: I will ping people when I have the new nice PR :-)
<kwmonroe> very fine petevg -- fwiw, i'm really trying to say that j-p is all growed up and it's time to consider which format to deliver it in.
<petevg> Cool :-)
<kwmonroe> your keyboard says smiles, your subtext says ugh.
<petevg> Read into stuff much? :-p
<kwmonroe> you did it again!
<petevg> It is in Python2 still. Silly bug.
<cory_fu> kwmonroe: Stop arguing about it and create a snap.  ;)
<kwmonroe> 90 seconds and i have no retort.  you win this round cory_fu.
<cory_fu> :)
<Teranet> ok quick question I had a host deployment failure it timed out in the bios setting how can I initiate a redeployment to this box  ?
<Zic> ryebot / mbruzek / lazyPowe_ : just a final word: I did all my resilience and HA test and this time, all is working
<mbruzek> Zic: great
<ryebot> Zic: Awesome!
<lazyPowe_> Zic awesome, glad you kept at it and had positive results :)
<lazyPowe_> ^5
<Zic> the customer of this architecture is leaving an old VMware ESX platform; those robust host machines will be added as kubernetes-workers in time :)
<Zic> the EC2 instances are just there to pop up near their own customers, in their country, for each endpoint
<Zic> I will keep you updated as a testimonial of how well CDK does the work for the coming launch :)
<Zic> in time, this cluster will run 3 Vitess clusters, Cassandra/Spark/Zeppelin, some Nginx and php-fpm7
<lazyPowe_> thats a nice spread of workloads
<lazyPowe_> got some presentation layer, some app layer, some business intelligence in there, and i dont know what vitess is but i assume its funky vegetables
<Zic> lazyPowe_: you are on the Vitess Slack, aren't you?
<lazyPowe_> Zic - negative, i'm on 7 slacks but that is not one of them
<Zic> ah, I crossed paths with you on the K8S Slack :)
<Zic> (for a problem about Vitess, and so, I was invited to the Vitess Slack, I didn't remember where we'd crossed paths :))
<Zic> http://vitess.io
<Zic> it's the way that YouTube uses MySQL in their infra, especially in Kubernetes/Borg
<lazyPowe_> ahhhh ok
<lazyPowe_> bookmarked for later reading
<petevg> kwmonroe, cory_fu: PR for you https://github.com/juju-solutions/matrix/pull/73
<Zic> on a totally different subject, my colleague saw the OpenStack Juju charm bundle
<Zic> it's pretty... dense :)
<mbruzek> Zic there are lots of applications in OpenStack
<mbruzek> Zic: but one can upgrade through the releases of OpenStack easily with those charms.
<Zic> (he saw me do some drag'n'dropping in the Juju GUI, and when he visited the jujucharms.com website he saw the openstack bundle)
<Zic> mbruzek: yeah, he is interested, as we also have a PoC for OpenStack and... it was not so conclusive
<Zic> if you want to build OpenStack from scratch on your own, and maintain this infra, it costs a lot of time, especially at the beginning when you're alone
<Zic> (before the infra is up, running and... documented :))
<Zic> I think he will try the OpenStack bundle charm :D
<Zic> going to sleep anyway, I'm at extra-unofficial-hour for too long :)
<marcoceppi> Zic: I totally recommend it, I know lots of 1-3 people teams that maintain openstack in production with juju
<Zic> yeah, with my own experience with Juju now, I can recommend it for other technologies :)
<Zic> we mainly use Puppet here as our configuration-management tool, sometimes Ansible for particular tasks...
<Zic> as a K8S module for Puppet does not exist and would be a headache to maintain ourselves, I went with Juju :)
<Zic> (kubeadm first, then I discovered CDK via Juju)
<Zic> the second lesson I learned from Juju/K8S/Vitess is that I must learn Go someday :p
<Zic> more and more of the tech I use is written in Go
 * Zic said he is going to sleep 10min ago, too talkative, g'night
<marcoceppi> cheers o/
<stormmore> so how do I go about enabling elasticsearch and kibana for logging on my CDK cluster?
<narinder> thedac, were you able to talk to jonh from CPLANE today
<narinder> ?
<thedac> narinder: yes
<stormmore> I keep coming across documents that talk about exporting a couple of env vars before bringing up a k8s to get it to spin up elasticsearch and kibana pods. does anyone know how to get those pods deployed in a pre-existing cluster easily?
<cholcombe> was add-metric ever added to the charmhelpers?
<cholcombe> cmars, ^^
<cmars> cholcombe, no, i don't think it was
<cholcombe> cmars, we should get that fixed so i don't have to keep calling subprocess.check_output to add metrics :)
<cmars> cholcombe, could do, yeah
<cmars> cholcombe, where is the charmhelpers project?
<cmars> still on LP?
<cmars> hmm, seems so. sure, i'll look into this
<cholcombe> cmars,  https://code.launchpad.net/charm-helpers
<cholcombe> yeah
<cholcombe> cmars, i'm working through a PR on gerrit and people are asking why I have to subprocess call to a juju function
<cmars> cholcombe, it'll probably take all of a couple minutes to write, the rest of the day to document and test :)
<cholcombe> lol yup
<cholcombe> cmars, sorry to be a pain in the butt
<cmars> cholcombe, :) its fine, i'm just complaining
<cmars> needs to be done.. layer:metrics isn't terribly efficient
<marcoceppi> stormmore: hey, so you can either deploy elastic search and kibana on your cluster, or you can deploy elasticsearch/kibana/beats along side it
<stormmore> marcoceppi at the moment I am thinking of on the cluster to minimize the number of "machines" in use. still in the process of architecting a bigger bare metal cluster
<stormmore> marcoceppi I already have a small k8s cluster deployed though
<marcoceppi> stormmore: makes sense. I don't have much experience in doing that, but you should be able to follow any online guide that walks through elastic on k8s
<cholcombe> cmars, i'd use it but the ceph charms haven't gone layered yet
<marcoceppi> stormmore: I can't confirm, but this one looks promising: https://github.com/kayrus/elk-kubernetes
<marcoceppi> it's at least been recently updated
<stormmore> marcoceppi and that is where the problem lies, it seems to assume that you are not adding it but enabling it before bringing up the cluster... from what I can see it is an addon at this point
<stormmore> https://kubernetes.io/docs/user-guide/logging/elasticsearch/
<stormmore> lazyPowe_ are you around? do you have any input into adding elasticsearch & kibana pods to kube-system?
<lazyPowe_> stormmore - our integration point was an external logging deployment using the beats-core bundle as a foundation for that effort
<lazyPowe_> the idea is that if your k8s cluster is sick, you'd want some persistence around that data, and have it be accessible regardless of the kubernetes system state
<lazyPowe_> so it uses beats to ship the data over, and then gets parsed-reinterpreted by the kibana dashboards
<stormmore> lazyPowe_ hmmm interesting, considering conjure-up docs suggest that it deploys 2 elasticsearch pods and a kibana one
<lazyPowe_> wat
<lazyPowe_> when did this happen?
<lazyPowe_> stokachu - wat?
<stormmore> https://insights.ubuntu.com/2016/11/21/conjure-up-canonical-kubernetes-under-lxd-today/
<lazyPowe_> ooooo
<lazyPowe_> this isn't the conjure-up docs or prompt, this is the upstream k8s guide
<lazyPowe_> right, at this time, the beats core bundle was part of CDK
<lazyPowe_> its now an ancillary bundle, pending our v5 update of the elastic stack components
<stormmore> "conjure-up kubernetes" vs "conjure-up canonical-kubernetes"?
<lazyPowe_> the work that's there still works and functions as it did then, but it could be better with the v5 updates, as there were a ton of fixes, a normalized versioning schema, etc.
<lazyPowe_> stormmore - so in short, what you get today with canonical-kubernetes is much more aligned with a smaller deployment, and you can then add the beats components and relate it all. we have a todo to get another bundle published since we moved to the fragments, but we're holding off until the v5 rev of the elastic stack iirc
<lazyPowe_> stokachu - un-wat, miscommunication
<lazyPowe_> stormmore - i'll take a line item to bring this up with the team about seeing if we can get you an elastic-enabled bundle tomorrow
<lazyPowe_> most of the team has left, i'm sticking around a little bit longer to check on this deployment i'm running, and then i'm out for the evening as well
<stormmore> lazyPowe_ awesome, no worries
<lazyPowe_> i would build you one now, but that would be a "throw it over the fence good luck i'm behind 7 proxies" kind of thing to do
<lazyPowe_> i'd rather at least run a test deployment before i put it in your hands
<stormmore> lazyPowe_ I am just trying to make sure I have everything in place so dev doesn't need access to the nodes and have a UI to get the logs from
<lazyPowe_> yep, totally understand that
<lazyPowe_> why give them admin when read-only works
<lazyPowe_> have you been looking into RBAC k8s primitives perchance?
<lazyPowe_> those seem like they are going to be right in your wheelhouse
<lazyPowe_> you can assign roles to namespaces and scope what primitives they can interact with
<lazyPowe_> rather roles to users, in a namespace, and ....
<lazyPowe_> see above
<lazyPowe_> stormmore - https://kubernetes.io/docs/admin/authorization/
<lazyPowe_> we haven't fully enabled this yet as its currently in BETA
<lazyPowe_> but you'll def want to read up on it, and when we land the feature set in the charm to make that configurable, you'll be in container-topia
<stormmore> yeah exactly
<cmars> cholcombe, here you go, wasn't nearly as bad as i thought :) https://code.launchpad.net/~cmars/charm-helpers/add-metricenv/+merge/315952
<cholcombe> woo
<cholcombe> cmars, nice. i forgot about the JUJU_METER thing
<cmars> i should write more python tests for my charms... mock.patch is pretty easy to work with
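For readers unfamiliar with the mock.patch pattern cmars mentions, a toy illustration (the `service_running` helper here is made up for the example, not the real charmhelpers function):

```python
import subprocess
from unittest import mock

def service_running(name):
    """Toy charm helper: report whether a systemd unit is active."""
    try:
        out = subprocess.check_output(['systemctl', 'is-active', name])
    except subprocess.CalledProcessError:
        return False
    return out.strip() == b'active'

# mock.patch swaps out check_output, so the test never touches systemd
# and can run anywhere (including inside a charm's unit test suite).
with mock.patch('subprocess.check_output', return_value=b'active\n') as co:
    assert service_running('mongod')
    co.assert_called_once_with(['systemctl', 'is-active', 'mongod'])
```

Patching at the `subprocess` module level works because the helper looks up `subprocess.check_output` at call time.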
<cmars> gotta run now. if you could help me get this landed, or reviewed -- happy to fix things up however -- i'd much appreciate it!
<cholcombe> cmars, sure.  i can review it but i can't land it
#juju 2017-01-31
<stormmore> lazyPowe_ just for my knowledge, and if you don't mind me asking, what timezone does your team work in?
<lazyPowe_> stormmore most of us are US Central time
<stormmore> lazyPowe_ good to know :)
<stormmore> lazyPowe_ I am one of those weird people that works in whatever timezone is most suitable for the task at hand ;-)
<lazyPowe_> stormmore - i hear ya. As a traveling monk of the python order, i tend to shift my schedule around but normally work CST hours.
<stormmore> lazyPowe_ oh I am just a BOFH ;-)
<lazyPowe_> I was told i had to quit sporting that title
<lazyPowe_> and stop deleting my $lusers $HOME
 * stormmore will never stop sporting that title! computers would be so much better without their users! 
<lazyPowe_> welp, have a good evening. Things are looking good here
 * lazyPowe_ dips out for the night
<anrah> Hi all! Is there a way to add another network interface for a unit deployed to OpenStack?
<BlackDex> anrah: deployed via juju? bare-metal? using maas? what version of juju?
<BlackDex> what version of ubuntu?
<kjackal> Good morning Juju world!
<miken> axw: Hi there. I've just added a comment on https://bugs.launchpad.net/juju/+bug/1643430 . Interested if you know of any other workaround... (it can wait 'til tomorrow though)
<mup> Bug #1643430: Unassigned units in an error state cannot be removed <juju:Triaged> <https://launchpad.net/bugs/1643430>
<Budgie^Smore> Morning! D*$n it is already :-/
<miken> Morning kjackal
<axw> miken: I'm not aware of any workaround at the moment, sorry
<axw> miken: I mean, apart from getting access to the controller and hacking the mongo DB
<anrah> BlackDex: I mean that I am not deploying openstack but deploying my own charms to an existing openstack
<anrah> BlackDex: 16.04 and juju 2.0.2
<BlackDex> miken: what version of juju?
<BlackDex> anrah: using maas? or juju local with juju add-machine?
<miken> axw: Are there details of doing that on another bug? I could ask those with access to the controller to do so.
<miken> BlackDex: 2.0.2
<BlackDex> hmm, i know of a tool for juju 1.25 called mgopurge, don't know if that can be used for 2.0.2
<BlackDex> miken: https://github.com/niedbalski/fix-1459033
<axw> miken: not that I know of. mgopurge is for fixing broken transactions, that wouldn't help in this case.
<BlackDex> oh no
<BlackDex> no, wrong one
<BlackDex> or that could also be a fix
<BlackDex> maybe
<BlackDex> but only for 1.25.2
<BlackDex> and the mongo cleanup is https://github.com/niedbalski/fix-1613866
<BlackDex> but, i don't know if that works for 2.x
<axw> miken: don't try and remove the model (yet)
<miken> Thanks BlackDex
<miken> axw: oh, let me update that RT then :)
<miken> axw: Did you want access to it, or what's possible?
<axw> miken: if you destroy-model then it gets into an even worse state. if you restart the controller agent, it should try to assign the unit to a machine again
<miken> Ah - thanks. I'll note that.
<axw> miken: sorry, should have said before - memories are fading back in
<anrah> BlackDex: Not maas, OpenStack is my cloud
<BlackDex> maas is not a cloud ;) it's a bare-metal provisioner
<anrah> BlackDex: well yeah :) But anyway just to OpenStack
<BlackDex> but if i'm not mistaken, juju using lxd on 16.04 would add all bridged interfaces to the lxds
<BlackDex> if not, then you probably need to change the template/profile of the lxd containers juju created
<anrah> I'm not using LXD Juju just provisions new instances to OpenStack
<BlackDex> you mean using openstack as a cloud provider?
<anrah> Yes
<anrah> on my model config I say: network: 5c7cd500-c581-4491-86fa-af95a71e8c18
<anrah> basically i want another network to my model and from there to my instances
<BlackDex> i haven't done that much with juju using openstack as a cloud provider
<BlackDex> i would guess that creating a network and adding the id of that network would be enough
<anrah> Yes, that works for one network, but the need is basically to separate the mgt and data planes
<anrah> So for external communications the instances would use interface X and for management (ssh + other stuff) interface Y
<anrah> I can do that by manually adding another network after the instance is running but I was wondering is there a way to do that with Juju
<BlackDex> then i think you have to look into the spaces for this
<BlackDex> https://jujucharms.com/docs/2.0/network-spaces
<CoderEurope> Any time anywhere - twitter works on my phone http://imgur.com/v1sOmkB
<magicaltrout> that tweet is a lie
<magicaltrout> you don't see the london eye from canonical's office! :P
<anrah> BlackDex: yep, currently only for MAAS, have to figure out something :)
<Zic> hi here, maybe it's an "RTFM" question: how can I check, without actually doing any action, if upgrades are available for all charms that I'm using in one juju command?
<marcoceppi> Zic: `juju status` should show if upgrades are available at the top of the status output
<marcoceppi> Zic: I could also write you a real quick `juju show-upgrades` plugin, because I don't think it's as apparent
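A `juju show-upgrades` plugin like the one marcoceppi offers could be a short script over `juju status --format=json`; this is a sketch, and the `can-upgrade-to` field name is an assumption about the status output, so verify it against your juju version:

```python
#!/usr/bin/env python3
# juju-show-upgrades: list applications with a newer charm revision available.
import json
import subprocess

def apps_with_upgrades(status):
    """Pick out applications whose status reports an available upgrade.

    Assumes the status JSON carries a 'can-upgrade-to' key per application
    when the charm store has a newer revision.
    """
    apps = status.get('applications', {})
    return sorted(name for name, app in apps.items()
                  if app.get('can-upgrade-to'))

if __name__ == '__main__':
    status = json.loads(
        subprocess.check_output(['juju', 'status', '--format=json']))
    for name in apps_with_upgrades(status):
        print(name)
```

Dropped on PATH as `juju-show-upgrades`, juju would pick it up as a plugin invocable via `juju show-upgrades`.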
<stub> magicaltrout: Not now, but the previous office had a great view of it.
<Zic> marcoceppi: oh I didn't notice for juju status, thanks
<surf> what is the difference between a model and a controller in Juju?
<Zic> hmm, I think EasyRSA did something wrong here: http://paste.ubuntu.com/23899101/
<verterok> good morning
 * verterok gets coffee and reads the backlog
<Zic> if someone of the CDK team is around?
<marcoceppi> surf: a controller is a special model which runs the Juju control plane that exposes the GUI and API server, and dispatches events in the deployment
<Zic> lazyPower: would you believe me if I said that I have another problem and was waiting for you to show up? (hello anyway ^^)
<rick_h> Zic: lol, why does that sound like "I'mmmmm baaaaacccckkkk!" in my head
<Zic> huhu :)
<Zic> I'm back to haunt
<Zic> lazyPower: when you will have a little time: http://paste.ubuntu.com/23899271/
<Zic> sorry in advance
<lazyPower> Zic it appears that the etcdoperator deployment has left some garbage behind, and potentially changed some tls certificates. I'm not positive which as i haven't used etcd-operator
<Zic> oh, I didn't know that was possible, but it was the only thing I did before encountering this error...
<Zic> do you think it's recoverable? or must I restore an old snapshot?
<lazyPower> well you're getting TLS errors in your log spam here - Jan 31 12:13:18 mth-k8smaster-01 kube-apiserver[1177]: E0131 12:13:18.033706    1177 handlers.go:58] Unable to authenticate the request due to an error: crypto/rsa: verification error
<Zic> yeah, but I didn't know etcd-operator would touch that part
<lazyPower> try dumping your kubernetes objects and seeing if it left something behind you can delete
<lazyPower> i imagine it added something to the k8s object store and thats whats causing the error
<lazyPower> but thats just a guess
<Zic> lazyPower: I can't run delete command over kubectl anymore btw :(
<lazyPower> i'm really baffled at how this keeps happening.
<Zic> to reassure you, before all this exotic stuff (Vitess & etcd-operator) all was working fine (like an ElasticSearch cluster or a Cassandra one)
<Zic> I was waiting for your advice before returning to a previous working state
<Zic> but then, I will avoid etcd-operator
<lazyPower> my thoughts are to check your tls certificates with x509 validation to ensure you have the correct IP Addresses
<lazyPower> including the SDN address
<lazyPower> Zic - another thought would be to deploy the kubernetes-e2e charm, and run an e2e validation suite post restore/fix to ensure the cluster is behaving as we expect it to
<lazyPower> Zic - https://jujucharms.com/u/containers/kubernetes-e2e/
<Zic> oh, I was searching this kind of solution
<lazyPower> Zic - i think its probably fine to use etcd-operator, but we need to know what its doing
<Zic> it's like a conformance tool for CDK?
<lazyPower> and then account for anything its done
<lazyPower> yeah, e2e is written by google + contributors to validate the k8s deployment behaves as expected
<Zic> sounds cool
<lazyPower> it runs very complex scenarios in kubernetes automatically, and generates quite a bit of load during its testing suite
<lazyPower> and at the end will report any errors it discovers during the test run. we run this daily on CDK and publish the results to gubernator (their upstream dashboard)
<lazyPower> https://k8s-gubernator.appspot.com/builds/canonical-kubernetes-tests/logs/kubernetes-gce-e2e-node
<Zic> lazyPower: all my certificates have the right names in them (tested via openssl x509) and they haven't been modified since the 16th of this month
<lazyPower> seems like its something else thats caused the problem. did etcd-operator leave anything behind in the kube-system namespace?
<Zic> nope, I searched through RCs, deployments, pods, PVs and PVClaims, statefulsets, thirdpartyresources, services... in --all-namespaces
<Zic> lazyPower: I think I've had enough of etcd; Vitess can use either an etcd cluster or ZooKeeper, and I'm more confident with ZK, so I think I will switch to it
<Zic> and what's more, I will let you get back to work :D
<Zic> we've spent so many hours together, I don't have enough pizzas to send you!
<lazyPower> :D
<lazyPower> at this point i'd be happy with a pint and a pack of gum :D
<lazyPower> however i'm bummed you ran into so many errors that caused the cluster to crash, there's obviously something going on with that last deployment (i presume) that has altered the state of the cluster. I'm also wondering if you're using NTP on your servers to ensure there's no clock drift?
<lazyPower> i know that clock skew can cause some weird issues here and there to crop up
<kwmonroe> rick_h: reminder received (wrt juju ci show-and-tell tomorrow)
<rick_h> kwmonroe: you ok for it?
<rick_h> kwmonroe: and thank for the ack
<kwmonroe> you bet! what time is the show tomorrow?
<rick_h> kwmonroe 2pm est
<rick_h> kwmonroe: I'll get you an invite
<kwmonroe> thx rick_h
<mskalka> marcoceppi, Are you available for a minute? Also do you ever respond to pings with 'Polo!'?
<marcoceppi> mskalka: Polo, I mean no - never.
<mskalka> so I'm charming up Rocket.Chat just to get familiar with the reactive framework, and I've run into a roadblock, namely the MongoDB charm.
<marcoceppi> mskalka: how can I help?
<mskalka> after some poking around I see you have a layered version in the works, but it's not completed. I'm interested in pushing it forwards a bit, what's a good place to start?
<marcoceppi> mskalka: we need to figure out replicasets, it's a bit out of my depth, but this is the latest layer: https://github.com/marcoceppi/layer-mongodb
<marcoceppi> mskalka: I also have this fork for the mongodb interface: https://github.com/marcoceppi/interface-mongodb which adds new support for a peer
<marcoceppi> mskalka: basically, when a charm is deployed, the leader should init the RS, then for each peer added the charm should check "am I the master of the RS" and if so, add the new unit to the RS
<marcoceppi> I just can't get it to work for me, for the life of me
<mskalka> ok. I'll look into how the old charm handles it, maybe I can suss it out
<marcoceppi> mskalka: thar be dragons in the old charm
<mskalka> thar be dragons in most pre-reactive charms ;)
<marcoceppi> truth
<mskalka> alright that's a good a place to start as any, thanks!
<marcoceppi> mskalka: I'm about to relocate, but I'll be online in a bit again, I'm happy to help work through it.
<mskalka> marcoceppi, sounds good, if I run into a major roadblock I'll ping you
<mbruzek> lazyPower: Do you have some of the images for older kubernetes? I am working on my presentation and I don't know where to get our older images
<marcoceppi> mbruzek: I have some
<lazyPower> mbruzek i do, hang on while i fish up a lightning talk slide
<bdx> anyone interested in Juju <-> newrelic integration this would be a great time to make a ticket with them for a python sdk, I've started the commotion see http://i.imgur.com/dvPlFcb.png
<mbruzek> Thanks guys
<lazyPower> or marcoceppi can totally swoop in
 * marcoceppi HAWK SCREEEEECH
<mbruzek> murica
<marcoceppi> bdx: yassssss
<lazyPower> oh i totally lied - https://docs.google.com/presentation/d/1m69DG957JK9PMCEXNFL80_0DFXBfQbfIEB6obAK6HPY/edit#slide=id.g70d4533c6_0_22
<lazyPower> only has one slide of k8s formation(s)
<lazyPower> pre circle icons to boot
<mskalka> marcoceppi, OK, I have a good idea of what's happening in the old Mongo charm (or a horribly myopic idea, we'll see) can we discuss later this afternoon when I've had a chance to poke at its ugly innards a little more?
<Zic> have you ever seen Flannel start before the network target? I observe this only on my bare-metal server where I use a bonding interface
<lazyPower> Zic - it's started after the network-online.target is reached  https://github.com/juju-solutions/charm-flannel/blob/master/templates/flannel.service
<stormmore> hola juju world! :)
<stormmore> lazyPower should we use an NTP charm as a subordinate on a cluster?
<lazyPower> stormmore - indeed
<lazyPower> stormmore https://jujucharms.com/ntp/
<stormmore> lazyPower I am aware of that one, related to what though in the k8s model?
<lazyPower> kubernetes-master, kubernetes-worker, etcd
<lazyPower> we should probably include that in the bundle at some point in  the near future
<stormmore> btw when I actually get my hardware, etc. setup, I plan on having a model just for "testing" the bundles, etc.
<stormmore> lazyPower that is what I was thinking, I know it is part of the openstack bundle
<lazyPower> stormmore - yeah, thats a great idea to have a separate model for testing things like upgrades and what not before you execute against a production cluster
<lazyPower> despite our best efforts, we're going to try to test every scenario, but we'll find corner cases that are rough until we've gotten a few hundred upgrades under our belt and shaken all the bugs out of that path
<lazyPower> i'm sure Zic will testify :)
<stormmore> lazyPower - exactly my thoughts, plus being able to help with developing things along side you guys :)
<lazyPower> <3
<Zic> lazyPower: I'm testifying everything lazyPower said <3
<Zic> (or, near that)
<rick_h> marcoceppi: do you know anyone that knows the nagios charms well I can bug?
<tvansteenburgh> rick_h: what do you make of this? http://pastebin.ubuntu.com/23901081/
 * rick_h goes ruh roh
<rick_h> tvansteenburgh: looks like they up'd the version of the terms to 1 and you haven't agreed to that yet
<rick_h> tvansteenburgh: try juju agree but with a /1 vs a /0?
<tvansteenburgh> rick_h: right, but /1 isn't listed in the charm's terms
<rick_h> tvansteenburgh: oic, hmm
<tvansteenburgh> rick_h: also, it seems i'll never be able to agree to /0 anyway, according to this: https://github.com/juju/juju/blob/afeb62dd9f750437a97ffbf275b1d1524836d513/cmd/juju/romulus/agree/agree.go#L95
<rick_h> tvansteenburgh: that's fishy there..
 * rick_h is trying to see if he can list the terms for that team with charm terms -u xxx
<rick_h> tvansteenburgh: ok, so sounds like a question/bug for cmars
<rick_h> tvansteenburgh: I can agree to /1 and get the terms listed/etc
<rick_h> tvansteenburgh: but I also see I cannot agree to /0, but that's what's listed in charm terms -u ibmcharmers
<mskalka> marcoceppi: *puts on swim trunks* Marco?
<rick_h> tvansteenburgh: so it sure seems like the UX is trying to increment the revision to non-zero so that it doesn't do 0-based counting for users
<rick_h> tvansteenburgh: but not being complete
<tvansteenburgh> rick_h: okay. i've got automation that agrees to terms. it relies on the terms returned by the api being accurate. i guess in the meantime i could maybe parse the charm pull error message for the terms i need to agree to
<rick_h> tvansteenburgh: if cmars confirms the logic, I'd just check the version the charm API says, and if that's 0, add one. The charm should be updated to say /1
<rick_h> tvansteenburgh: and that should be reliable
<cmars> just back from lunch
<tvansteenburgh> rick_h: oh, i see. cool, that would be easier.
<cmars> tvansteenburgh, term revisions start with 1
<tvansteenburgh> https://api.jujucharms.com/v5/~ibmcharmers/ibm-websphere-liberty-5/meta/any?include=revision-info&include=promulgated&include=id-name&include=owner&include=terms
<rick_h> cmars: right, but if I charm terms -u ibmcharmes I get back a /0 for ibm-wlp/0
<rick_h> cmars: so the charm is set to that and that doesn't work as you note above
<rick_h> sorry, -u ibmcharmers
<cmars> rick_h, just because the charm metadata has a term revision 0, doesn't mean that a term revision 0 exists
<cmars> it's an invalid term ID
<tvansteenburgh> ok so that is just parsed out of the charm's metadata.yaml?
<rick_h> cmars: ok, so that's from the user? I'd assumed that the terms api would auto handle incrementing the revision so that they can't change existing revisions/etc
<cmars> tvansteenburgh, correct
<cmars> rick_h, ^^
<rick_h> oic, I thought it was pulling from the terms api itself as to what terms are stored
<rick_h> cmars: is there a way to query the terms service directly?
<cmars> rick_h, yes
<cmars> rick_h, see https://github.com/juju/terms-client for example
<rick_h> cmars: ah ok, I was looking through juju/charm command and didn't see anything.
<tvansteenburgh> aha
<cmars> rick_h, we add terms-client to the charm snap
<cmars> plugins
<rick_h> cmars: sorry, I'm missing something. So as a diff snap? I've got the charm snap but not finding any show-term and the like. Is that a recent update?
<stormmore> lazyPower did you get a chance to talk to the team about elasticsearch/kibana?
<lazyPower> stormmore ah thanks for reminding me, i haven't.
<tvansteenburgh> cmars: my use case is, "for a given charm url, show me a list of the terms i need to agree to"
<stormmore> lazyPower no problem, hence the reminder ;-)
<tvansteenburgh> rick_h: it's in the latest snap from --candidate
<rick_h> tvansteenburgh: ah, I'm on stable
<lazyPower> stormmore - let me table this for tomorrow and at bare minimum i'll run a bundle generation and kick a deploy before i EOD today to ensure it still turns up correctly
<rick_h> tvansteenburgh: k, I feel less out of it then ty
<lazyPower> stormmore - if it works as expected in its current form i'll send you over the bundle in my namespace, and we can pilot from there
<stormmore> lazyPower no problem, my dev teams are being slow anyway :)
<lazyPower> stormmore - i know for a fact bdx wanted this integration in the past, and i do believe that hasn't changed
<cmars> tvansteenburgh, so for that, you'd use this API call: https://github.com/juju/terms-client/blob/master/api/api.go#L315
<cmars> it's macaroon authenticated, because the request is made for a logged in user
<tvansteenburgh> ok
<stormmore> in the meantime I am going to look at deploying Nexus 3 into the cluster for our private registry
<cmars> tvansteenburgh, you'd build a list of term IDs from charm metadata, then add them to a CheckAgreementsRequest (https://github.com/juju/terms-client/blob/master/api/wireformat/entities.go#L171) and call GetUnsignedTerms with that
<tvansteenburgh> cmars, rick_h: i think i have what i need now, thanks for your help!
<cmars> ok, great
<cmars> can someone review my merge proposal into charmhelpers? https://code.launchpad.net/~cmars/charm-helpers/add-metricenv/+merge/315952
<marcoceppi> cmars: any reason this isn't in hookenv?
<cmars> marcoceppi, it's a different hook execution environment
<marcoceppi> so are actions, but they're in hookenv
<marcoceppi> different or not, it's still a hook environment?
<cmars> marcoceppi, you can't use metricenv stuff from normal hooks at all. and vice-versa
<marcoceppi> cmars: likewise with relations (to an extent) and actions
<cmars> marcoceppi, i thought separating them would make this distinction clearer to the API user. "these aren't available for hooks generally -- these are special"
<marcoceppi> cmars: it's better to have the commands check if they are in the right hook context and raise exceptions when not, but not even the actions do this
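A hook-context guard of the kind marcoceppi describes might look like this sketch; to my knowledge juju exports JUJU_HOOK_NAME into every hook environment, but treat that variable, the decorator, and `record_metrics` as illustrative rather than the charmhelpers API:

```python
import os
from functools import wraps

class WrongHookContext(Exception):
    """Raised when a hook tool is used outside its valid hook context."""

def collect_metrics_only(func):
    """Refuse to run outside a collect-metrics hook, instead of letting
    the missing add-metric hook tool produce a confusing traceback later."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        hook = os.environ.get('JUJU_HOOK_NAME', '')
        if hook != 'collect-metrics':
            raise WrongHookContext(
                'add-metric is only valid in collect-metrics, not {!r}'
                .format(hook))
        return func(*args, **kwargs)
    return wrapper

@collect_metrics_only
def record_metrics(**metrics):
    return metrics  # stand-in for the real add-metric hook-tool call
```

The guard fails fast with a charm-author-friendly error rather than a subprocess failure deep inside the hook.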
<marcoceppi> I don't really see how this warrants a departure from the existing hookenv.py
<marcoceppi> that said, hookenv and charmhelpers in general need to be retired for something better. but that's a longer story
<lazyPower> our world is going to catch on fire when we do that
<cmars> marcoceppi, i can concatenate it as well. hookenv is only 1037 LOC, there's room ;)
<lazyPower> you're referring to gutting plumbing from 99.9% of all charms
<marcoceppi> lazyPower: gutting, improving, tomato, tomato
 * mskalka shudders
<lazyPower> ^
<marcoceppi> I have an elaborate plan for this
<lazyPower> i have elaborate arguments for you every step of the way <3
<lazyPower> but trolling aside, whats your plan marcoceppi?
<marcoceppi> lazyPower: we've talked about this before, years in the making
<marcoceppi> charmhelpers is bloatware, I'd like to pull the things out of core and make them feel more like how juju presents its tools
<lazyPower> yeah
<lazyPower> thats easily a 6 month project if not a year in terms of deprecation and cleanup effort
<lazyPower> a lot of older charms are gonna get bit by that and die off slowly
<lazyPower> which i'm OK with
<lazyPower> if you're not maintaining it, let it die (i'm deeply seated in this camp of having unmaintained charms)
<cmars> marcoceppi, i can move the code if necessary to get that landed. anything else need to change for that MP?
<marcoceppi> cmars: you make a lot of assumptions that add-metric exists on disk
<cmars> marcoceppi, no more than open-port
<marcoceppi> cmars: open-port has been in juju since 0.2
<marcoceppi> cmars: I recommend taking a look at status-set and network-get on how they implement newer features without dealing with tracebacks
<magicaltrout> if people didn't have elaborate plans they wouldn't work at Canonical.....
<stokachu> world domination
<mskalka> marcoceppi: while you're here, can I pick your brain for a minute?
<marcoceppi> mskalka: go for it
<mskalka> marcoceppi: just want to know what you've tried in the past to get the replset thing moving, then run what I have in mind past you
<mskalka> marcoceppi: just to be sure I'm not barking up the wrong trees
<marcoceppi> mskalka: so, I've never really tried to code it, I'll be honest I've never gotten it to work manually
<marcoceppi> mskalka: my plan was to do this: is-leader? does leader-settings say I've bootstrapped this rs? no - init rs, yes ignore
<marcoceppi> on each new peer addition, each unit checks to see if it's the RS leader (not the juju leader) and if it is, adds the peer
<mskalka> marcoceppi: that's what I had in mind as well, without the 'is rs init'd', just have a @only_once on leader elected to spin it up
<mskalka> marcoceppi: then again a sanity check is probably a good idea. Then just fill in the rest for broken/departedl(kick off new election if leader, else remove)
<marcoceppi> mskalka: yeah, I wouldn't always trust @only_once; with @when('leader.elected') we can just see if leader_get('rs.init') is true (and even verify by probing mongo)
<marcoceppi> mskalka: there's a weird potential race condition that could arise, where juju re-elects a leader to a unit which has just done install but not yet gotten config / relations, and so it'd run @only_once and is_leader, init a new RS, and you've got split brain
<marcoceppi> by checking (and setting) leadership settings you can persist that data between elections
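The decision logic marcoceppi is describing can be sketched as two pure functions (the function names and the 'rs.init' key are illustrative, not the mongodb layer's actual code; in a real charm these would be driven by is_leader(), leader_get()/leader_set(), and a mongo probe):

```python
def should_init_replicaset(is_juju_leader, leader_settings):
    """Only the current Juju leader initialises the replica set, and only
    if no previous leader has recorded 'rs.init' in leadership settings.
    Persisting the flag is what stops a freshly re-elected leader from
    initing a second RS (the split-brain race described above)."""
    return is_juju_leader and not leader_settings.get('rs.init')

def peers_to_add(am_rs_primary, current_members, peer_hosts):
    """Only the RS primary (not necessarily the Juju leader) adds peers;
    returns the hosts it should rs.add()."""
    if not am_rs_primary:
        return []
    return [h for h in peer_hosts if h not in current_members]
```

Because leadership settings survive re-elections, `should_init_replicaset` returns False for every leader after the first one to set the flag.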
<mskalka> marcoceppi: I thought about that, I don't have enough juju experience yet to know if that would be an issue haha
<mskalka> marcoceppi: alright, it seems like I'm headed in the right direction then. I'll see if I can finish this up today or tomorrow, time allowing
<marcoceppi> mskalka: \o/
<mskalka> o7
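The guard marcoceppi describes above can be sketched as a small, testable function. This is a sketch under assumptions: in a real reactive charm the injected callables would be charmhelpers.core.hookenv's is_leader, leader_get, and leader_set, and init_replset stands in for whatever actually runs the replica-set initialisation.

```python
# Sketch of the leadership-settings guard from the discussion above.
# The hookenv-style calls are injected as plain callables so the logic
# is runnable outside a Juju hook context; 'init_replset' is a
# hypothetical helper, not part of any charm library.

def maybe_init_replset(is_leader, leader_get, leader_set, init_replset):
    """Initialise the MongoDB replica set exactly once across elections.

    Leadership settings persist between elections, so a newly elected
    leader sees 'rs.init' already set and skips re-initialisation,
    avoiding the split-brain race described above.
    """
    if not is_leader():
        return False          # only the juju leader may bootstrap the RS
    if leader_get('rs.init'):
        return False          # already bootstrapped by a previous leader
    init_replset()            # first leader ever: create the replica set
    leader_set({'rs.init': 'true'})
    return True
```

The point of checking leader_get rather than a unit-local flag like @only_once is that the record survives re-election to a fresh unit.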
<cmars> marcoceppi, updated, please take another look at https://code.launchpad.net/~cmars/charm-helpers/add-metricenv/+merge/315952 ?
<stormmore> so if I need to add an ingress entry point, do I modify the ingress controllers that are already in my cluster or should I be creating a new one?
<lazyPower> stormmore - you just add an ingress object
<lazyPower> the rest should be handled transparently
<lazyPower> stormmore - something like https://bitbucket.org/chuckbutler/awesome-potato/src/f820bfc106fbbc00ca6045e07e27ac1f86b8b4f5/deploy/development.yaml?at=juju-demo&fileviewer=file-view-default#development.yaml-101:115
<stormmore> cool thanks, that is what I thought. just a little tired after doing a few 18+ hour days lately
<lazyPower> i hear ya
<lazyPower> there are cases where you will want to scale your ingress controller, but most of those reasons have vanished in the 1.5.x release of k8s, as its ingress api uses the same api-controller pod for all namespaces so long as you scope your ingress objects with a namespace
<stormmore> is it better to use kind: Deployment vs kind: ReplicationController?
<lazyPower> before you had to deploy an ingress controller for every namespace, and it was tedious and resource intensive to run all those nginx pods
<lazyPower> yeah, you can use either, but deployments are favored as you can do rolling updates with them
<lazyPower> in blue/green style deployments
<stormmore> I am still getting my head around all the options in the YAML files
<lazyPower> deployments create rc's which create pods
<lazyPower> so you can run a --rolling-update on a deployment, and it will upgrade the RC, and slowly phase out the pods under the old RC until it can successfully delete all of them
<lazyPower> if your rolling update fails, you can reasonably revert back to the existing RC
<stormmore> nice, so apparently a lot of the videos I have been watching by hightower are outdated :)
<lazyPower> kubes moves so fast though
<lazyPower> its hard not to be outdated
<stormmore> apparently!
<lazyPower> Deployments are also still a beta resource
<lazyPower> so thats possibly why its not promoted in the training material
<lazyPower> like we tend to shy away from anything that's not listed as stable in the API because it's subject to change. betas don't usually get changes, but by the time we decide to rely on one it'll break and we'll have to change an implementation detail
<lazyPower> and nobody wants that
<stormmore> how would you recommend handling different paths? i.e. I really don't care about all the nodes responding to all the paths and it looks like I can create a single Ingress for the "host" and use paths to point to the right service
<stormmore> we were talking about creating api.domain.com/v1/<service> as our structure
<lazyPower> thats useful when you want to map a microservice into your url structure, like foo.com/api would route to your backend golang web-api impl
<lazyPower> and / routes to your expressjs frontend
<lazyPower> i dont use that particular format often, i tend to deploy with subdomains more often than i url mux
<lazyPower> but it does work, and works reasonably well might i add
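The path-to-service mapping described above looks roughly like this as a Kubernetes Ingress manifest. This is a hypothetical fragment: the host and service names are made up for illustration, and extensions/v1beta1 was the Ingress API group in the k8s releases being discussed.

```yaml
# Hypothetical Ingress routing /v1/<service> paths to backend services
# and / to a frontend, per the api.domain.com/v1/<service> scheme above.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-router
spec:
  rules:
  - host: api.domain.com
    http:
      paths:
      - path: /v1/users          # illustrative microservice path
        backend:
          serviceName: users-svc
          servicePort: 8080
      - path: /                  # everything else goes to the frontend
        backend:
          serviceName: frontend-svc
          servicePort: 80
```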
<stormmore> lol :) yup sounds like I am on the right track for my thoughts
<stormmore> so it appears that the Ingress now uses the service lb, do you know if there is going to be other options for the LB than round robin? for instance least conn?
<lazyPower> i dont off hand
<lazyPower> i would need to go dig around in the issue tracker
<lazyPower> i'm fairly certain there's a lot of talk around this; a lot of users are going the route of cloud-provider LBs, but that gets expensive quickly. We're talking about making some supporting charms to enable that class of infrastructure but nothing concrete yet
<lazyPower> https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/183 -- as an example
<stormmore> personally I don't care if it's in or out of cluster, other than the fact that out of cluster comes at a cost for bare metal clusters; but even between services in the cluster it would be nice to be able to better balance connections based on load or another metric
<stormmore> for now, I am pushing my dev teams to make their stuff fully stateless to account for the roundrobin service LB
<lazyPower> hmm
<lazyPower> stormmore - remind me another time to revisit the haproxy lb approach.
<lazyPower> i'm fairly certain we can tune this behavior in the nginx/haproxy flavors of an ingress controller
<lazyPower> i'm totally open to trying to patch this with a configmap so we can further tune the ingress behavior
<lazyPower> we have such a patch already submitted that needs additional vetting to enable running a registry in k8s
<lazyPower> stormmore - you might want to tag this PR and track it - https://github.com/juju-solutions/kubernetes/pull/100
<stormmore> cool thanks
#juju 2017-02-01
<magicaltrout> lazyPower: you wrapped ansible playbooks in charms didn't you in the demo in Ghent last year?
<lazyPower> magicaltrout - it was kind of hacky, and there was no follow up interest
<lazyPower> but i did yeah
<magicaltrout> cool
<magicaltrout> cause i have this 0.5PB server that I mentioned the other week
<magicaltrout> can't install anything on it but does have python
<magicaltrout> so i'm proposing I create some playbooks (never used ansible before, should be a treat) to deploy our code, then wrap the playbooks in charms to deploy the same stuff outside of the mega server
<lazyPower> interesting
<lazyPower> magicaltrout - https://github.com/chuckbutler/ansible-base early work was left here
<lazyPower> literally nothing more than POC work
<magicaltrout> thanks lazyPower thats very handy
<magicaltrout> I work for NASA, everything we do is POC ;)
<lazyPower> :)
<lazyPower> there's probably something heinous in there
<lazyPower> feel free to flame me later
<magicaltrout> i'll just pour beer on you instead.... by "accident"
<lazyPower> ooo please dont, i'll be lean on clean clothing
<lazyPower> headed to PHX on Thursday before i head to Ghent, so its literally like, last day of the conference i might be recycling socks
<magicaltrout> hehe
<magicaltrout> i'll bring pegs to block the smell
<bdx> is it possible to transfer model ownership?
<lazyPower> bdx i'm not certain, i would think its possible via juju grant/revoke && juju share
<bdx> lazyPower: `juju share` - ha dreaming
<lazyPower> ok so that command changed on me
<lazyPower> i make no apologies for progress happening :P
<lazyPower> i'm gonna dip out and go get an extremely late dinner
<lazyPower> bbiaf
<rick_h> bdx: so if you make another account the model admin then it's just the same
<rick_h> bdx: should be able to remove yourself and the other person now has the admin rights
<bdx> rick_h: ahh niceeeee! thanks!
<lazyPower> rick_h - i was totally there, but dropped the ball that juju-share is no longer a thing (derp)
<lazyPower> byproduct of working late, sorry bout that one
<kjackal> Good morning Juju world!
<surf> anrah are you there
<aisrael> Good morning!
<Zic> lazyPower: (don't freak out, I don't have new problems :D) -> I saw that all our VMs (which host the control plane of k8s: master, etcd, kube-apilb, easyrsa) are showing I/O errors under intensive use (many cluster operations at once, like deleting large namespaces)
<Zic> lazyPower: the problem is clearly identified on our side, it's the storage of the ESXi which slows down
<Zic> lazyPower: so, just to let you know that's not the fault of CDK :]
<Zic> I will pursue my environment tests on this cluster, but I think I will re-bootstrap from scratch before going into production
<Zic> (once the I/O problem with my storage is fixed)
<marcoceppi> Zic: that's good insight, we're trying to build out a wiki of caveats on different providers; an ESXi gotchas page would be good to have
<Zic> all our virtualization is based on Proxmox clusters or VMware ESXi/vCenter clusters in my office
<Zic> I tend to prefer Proxmox for opensource but for this customer which will use CDK, it's an ESXi :/
<Zic> but as I said, I think the problem is located on the ESXi storage (which is not local, it's an iSCSI attached disk-arrays machine)
<Zic> maybe on the network side, or maybe on the disk-arrays itself
<marcoceppi> Zic: seems plausible
<jcastro> Zic: there's a sig-onprem, which meets today actually, that is collecting onpremise tips, tricks, issues, bugs, etc. that we're a part of
<jcastro> so if you want to pass any feedback along upstream ...
<lazyPower> Zic - ah, interesting. I just had a volume crash in my NAS that was backing my home lab. i'm in the process of copying the data to another volume now and prepping for a re-deployment of my remote storage. Seems like i've found a similar situation with iscsi backed volumes.
<lazyPower> weird how it struck us both at the same time.
<lazyPower> granted you said io issues not a crash, but i digress
<rick_h> lazyPower: you know anyone I can ping on this? https://github.com/juju/charm-tools/issues/287
<lazyPower> rick_h - marcoceppi is the grand poobah of that charm
<rick_h> lazyPower: ok, wasn't sure if he'd delegated these days ty
<lazyPower> s/charm/snap/
<lazyPower> i need coffee
<rick_h> marcoceppi: is there any way around https://github.com/juju/charm-tools/issues/287 ? I'm trying to create my first layered charm wheeee
<rick_h> lazyPower: does charm create work for you? If you could, could you create a shell and shoot me a tarball as a personal favor pretty please?
<lazyPower> rick_h https://www.dropbox.com/s/ey8bi262mqcys12/for_rick.tar?dl=0
<rick_h> my hero!
 * lazyPower flex's
<magicaltrout> don't let rick_h write charms! .....
<rick_h> but but but .... I promise to only do somewhat good things
<lazyPower> why not? he worked on the gui charm back in the day
<magicaltrout> hehe
<lazyPower> OOOhhhh i should have looked at who was trolling before i fed the argument ;)
<magicaltrout> morning :P
<lazyPower> \o magicaltrout
<magicaltrout> spent the morning writing some cfgmgmtcamp slides
<magicaltrout> figured i'll come and annoy you all now
<lazyPower> Thats on my todo list today as well
<magicaltrout> before the californians wake up and annoy me
<Zic> jcastro: this kind of meeting is planned regularly?
<jcastro> yep
<Zic> jcastro: I can prepare something for the next one, because for today I don't have enough time :/
<magicaltrout> started work on yet another charm yesterday
<magicaltrout> openldap
<jcastro> there's a sig-cluster-ops as well, always looking for feedback
<jcastro> Zic: I was just making you aware it exists
<Zic> jcastro: what is the date of the next one?
<jcastro> https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)
<lazyPower> magicaltrout - so i see you're embracing the big tree of hate known as ldap
<magicaltrout> lazyPower: well, er
<magicaltrout> yeah
<jcastro> 15 Feb is the next one
<lazyPower> :D
<magicaltrout> I don't see why not, i use ldap servers all the time and they're a ballache
<lazyPower> the last time i interfaced with ldap with any seriousness i recall being frustrated
<lazyPower> yeah
<magicaltrout> why not let it be easier with relations?
<lazyPower> +1 to that sentiment
<jcastro> basically you put what you want to talk about on the google document, they have mailing lists for each sig.
<lazyPower> i think we talked about this in pasadena
<magicaltrout> yeah
<magicaltrout> I have a blank implementation for interfacing with AD and stuff kicking around as well
<magicaltrout> I just need to tidy it up and ship it
<lazyPower> interesting, did you use the cloudbase AD as proof point?
<magicaltrout> nope, i used NASA's AD as a proof point ;)
<lazyPower> well i mean, we have an AD charm that CBS wrote
<lazyPower> last time i deployed it was back in 2015 however.
<lazyPower> i presume your AD "adapter" was a light weight forwarder then? a proxy-charm as it were.
<magicaltrout> yeah i know but i'm not overly bothered by its guts
<magicaltrout> there was no public interface
<magicaltrout> so i just created one with the configuration stuff i needed
 * lazyPower nods
<marcoceppi> rick_h: snap install --classic --candidate
<lazyPower> shame it wasn't more useful in its current form. but eyyyyyyy
<rick_h> marcoceppi: did
<rick_h> marcoceppi: or did you just update in the last 15min?
<marcoceppi> I haven't updated it, but it should be working
<marcoceppi> rick_h: are you still blocked?
<rick_h> marcoceppi: I got a tarball of the create from lazypower and I'm tweaking it and will see if I can run charm build or if I'll hit the same dep issue
<rick_h> marcoceppi: k, verified I can run charm build just not create
<marcoceppi> rick_h: odd, I'll take a look
<marcoceppi> rick_h: whoops, I see that now. I'll get a fixin
<rick_h> marcoceppi: <3 ty
<Zic> jcastro: I will try to come to the next one then :) I didn't notice, is it on Slack, IRC?
<jcastro> it's on slack too
<Zic> which one? (the link you gave me redirects to the wiki homepage, maybe because I'm currently signed out of GitHub)
<lazyPower> rick_h - any movement on compiling juju in a newer version of go in brew?
<lazyPower> rick_h - these stack traces are a drag :(
<rick_h> lazyPower: I have no idea tbh.
<rick_h> lazyPower: would have to check with balloons or sinzui on the plans there I think
<Zic> hmm, I really need to take time to learn Go someday if I want to contribute (and also because it's a more and more widely used language)
<Zic> my old Bash/C/Python skills are not up-to-date with 2017 I guess :p
<magicaltrout> if i update an interface and push it to github
<magicaltrout> do i need to do anything next time I build a charm relying on it?
 * magicaltrout has messed up somewhere
<marcoceppi> magicaltrout: nope, just build away
<magicaltrout> charmtools.build.tactics: Missing implementation for interface role: requires.py
<magicaltrout> what have i messed up then?
<magicaltrout> i renamed the underlying bits to fit the general interface naming
<magicaltrout> and now my charm doesn't build :)
<mskalka> sounds like one of the layers you're using isn't in the right spot
<magicaltrout> well
<magicaltrout> http://interfaces.juju.solutions/interface/solr/ interface is there
<mskalka> sorry I mean locally
<magicaltrout> and its referenced in layers.yaml and metadata.yaml
<magicaltrout> but the build directory charm build lists is empty when it falls over
<mskalka> hmm
<magicaltrout> this is my first attempt at a public interface so I've clearly messed up somewhere
<magicaltrout> normally my interfaces just live in $INTERFACES
<mskalka> you're sure charm build pulled the interface down into $JUJU_REPOSITORY/interfaces?
<magicaltrout> no it didn't, but I don't believe charm build does that now (any more?). They just appear in hooks/relations/ in your build dir from somewhere
<magicaltrout> if charm build pulled them all down I'd have loads in JUJU_REPO/interfaces
<mskalka> I'm not 100% on the build behavior but I ran into the same issue yesterday building a local charm and the fix was dropping a local copy of whatever missing interface I had into that INTERFACE_PATH dir
<lazyPower> it uses a temporary directory in your build path 'deps'
<lazyPower> you'll find what it pulled there
<marcoceppi> magicaltrout: are you using 2.2.0 charm-tools?
<magicaltrout> 2.1.9
<marcoceppi> magicaltrout: are you on ubuntu?
<magicaltrout> of course
<marcoceppi> magicaltrout: sudo apt purge charm charm-tools; sudo apt update; sudo apt install snapd; sudo snap install charm --candidate --classic
<marcoceppi> magicaltrout: the snap has 2.2.0 in it, which is much better at telling you whats happening during the build process
<magicaltrout> excellent
<magicaltrout> vague logging is my own forte
<magicaltrout> trolol same error
<magicaltrout> marcoceppi: does the build stage somewhere?
<magicaltrout> cause the target dir is empty
<marcoceppi> magicaltrout: can you run build with --debug and post the output?
<magicaltrout> http://pastebin.com/RSFRvTHR
<magicaltrout> nothing special
<magicaltrout> i actually put my interface back in $INTERFACE_PATH and I still get the error
<magicaltrout> so i've no idea what i broke in renaming it
<magicaltrout> although i do have a recollection of me naming my interface solr-interface initially because of this problem
<admcleod_> magicaltrout: hi! what do your layers and metadata yamls look like?
<magicaltrout> hello admcleod_ pretty standard
<magicaltrout> https://github.com/USCDataScience/sparkler/blob/master/sparkler-deployment/juju/sparkler/metadata.yaml
<magicaltrout> except solr-interface is now just solr
<magicaltrout> the errors are weird though from charm tools, like its building a half cached version
<admcleod_> magicaltrout: hmm. i think you broke it.
<admcleod_> magicaltrout: did you check what lazyPower said? deps?
<Zic> lazyPower: can I let kubernetes-e2e in a production cluster or it's not recommended?
<magicaltrout> yeah but then i just reverted to trying a local version
<magicaltrout> and thats screwed as well
<lazyPower> Zic - you can certainly run it against a prod cluster. its a great validation mechanism and it cleans up after itself
<Zic> "and it cleans up after itself" was what I wish to know, thanks :)
<Zic> in a near-default CDK, should I get any error?
<Zic> (I didn't try yet)
<magicaltrout> yeah admcleod_ I went back to calling a local version solr-interface and it returns to building fine
<magicaltrout> but if i call it just `solr` it freaks out
<admcleod_> build: Processing interface: solr
<admcleod_> ...
<admcleod_> worked
<magicaltrout> ah
<magicaltrout> found it
<magicaltrout> wtf
<magicaltrout> i think this goes down as a weird charm build bug
<magicaltrout> plus my wonky setup
<magicaltrout> I have ~/Projects/charms
<magicaltrout> and ~/Projects/charms/interfaces
<magicaltrout> in charms I had an empty directory called `solr`
<lazyPower> RIP
<lazyPower> magicaltrout - if its in the interface archive, try building with --no-local-layers
<magicaltrout> maybe its not a bug, maybe it searches various places for interfaces
<lazyPower> it does, and it will use local paths if it finds them
<lazyPower> the --no-local-layers ensures you're always fetching from the api
<magicaltrout> I thought it just looked in $INTERFACE_PATH?
<magicaltrout> oh well
<magicaltrout> weirdness averted
 * magicaltrout must remember not to put stuff in $JUJU_REPOSITORY that might share the name with an interface
<admcleod_> oh you
<magicaltrout> i think its a fair assumption it would look for interfaces on the interface_path! :P
<admcleod> well, you know what they say
<magicaltrout> the balder you are the more shiny your scalp?
<admcleod> actually that depends on buffing
<admcleod> but no i meant the other thing
<magicaltrout> don't assume things they're generally wrong?
<admcleod> thatll do :}
<cory_fu> bdx: Mind weighing in on https://github.com/juju-solutions/layer-basic/pull/86
<magicaltrout> cory_fu: just for reference as you guys use GH you could setup a CLA exactly like we do with the ASF: https://cla.github.com/
<magicaltrout> so that people contributing get some terms about copyright ownership and canonical's rights etc
<magicaltrout> so that if you do a license pivot whilst its nice to have asked, you can do what you like ;)
<magicaltrout> not that I think bdx will care especially, but ya know...
<lazyPower> this statement is true, you never know about bdx ;) ;)
<magicaltrout> depends what drugs he's under the influence of at that given point in time! ;)
<lazyPower> :O
<icey> has anybody tried mixing bash + python in a reactive, layered charm?
<mskalka> marcoceppi: I can see why you ran into issues even with manual replset initiation. Mongo is SUPER picky about its input
<marcoceppi> mskalka: it totally is.
<mskalka> marcoceppi: I spent 20 minutes trying to figure out why it thought my obvious string '10.X.X.X:Y' was being interpreted as an int. Needed double quotes.
 * mskalka bangs head on desk
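The quoting problem mskalka hit can be avoided by serialising the replica-set config with json.dumps rather than hand-building the string: json.dumps guarantees each "ip:port" lands in double quotes, which the mongo shell requires (an unquoted 10.0.0.1:27017 parses as a number followed by junk). rs_initiate_cmd below is a hypothetical helper, not part of any charm library.

```python
import json

def rs_initiate_cmd(rs_name, hosts):
    """Build a mongo-shell rs.initiate() call with properly quoted hosts.

    'hosts' is a list of "ip:port" strings; each member gets a sequential
    _id, matching the replica-set config document mongo expects.
    """
    cfg = {
        "_id": rs_name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }
    # json.dumps double-quotes every string, so the shell never
    # misparses a host:port as a numeric expression.
    return "rs.initiate(%s)" % json.dumps(cfg)
```

For example, rs_initiate_cmd("rs0", ["10.0.0.1:27017"]) produces a command whose host string is safely double-quoted.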
<bdx> magicaltrout: put some heat on this for me and I'll let that one go https://bugs.launchpad.net/juju/+bug/1660675
<mup> Bug #1660675: Feature Request: instance tagging via Juju <juju:Triaged> <https://launchpad.net/bugs/1660675>
<rick_h> 10min warning to Juju Show Ep #5
<rick_h> wooooooooo
<rick_h> Juju Show watch link: https://www.youtube.com/watch?v=NySW5VjBDC8
<rick_h> Juju Show "sit on the panel" link: https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=?eid=103184405956510785630&hl=en_US&authuser=0
<rick_h> arosales: marcoceppi jcastro lazyPower bdx magicaltrout mbruzek ^
<marcoceppi> rick_h: have fun, I'll watch later
<mbruzek> thanks Rick
<rick_h> kwmonroe: ^
<kwmonroe> thx rick_h, petevg ^^
<petevg> thx, kwmonroe. Heading over ...
<mbruzek> rick_h: I got a 403 with that url
<rick_h> mbruzek: try a different authuser at the end?
<rick_h> mbruzek: or take that off?
<arosales> still able to join?
<mbruzek> will do
<arosales> https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=?eid=103184405956510785630&hl=en_US&authuser=0
<arosales> 404 for me
 * rick_h tries w/o the authuser
<rick_h> https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=?eid=103184405956510785630&hl=en_US
<rick_h> we've got 3 other folks in atm
<arosales> rick_h: are you able to invite?
<mbruzek> arosales: delete everything after the equal sign
<rick_h> arosales: invited
<arosales> 404 all around
<kwmonroe> arosales: if you took authuser=0 out, try putting it back with =1, https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=?eid=103184405956510785630&hl=en_US&authuser=1
<arosales> no luck there either
<mbruzek> Try this: https://hangouts.google.com/hangouts/_/ytl/YAV4cq1d16ZlbovNrxLocKaJBoURiJ8c2KYWnDY-64E=
<mbruzek> arosales ^
<arosales> that worked, thanks mbruzek
<kwmonroe> nice!
<mbruzek> arosales: owes mbruzek a brewski
<arosales> mbruzek: you got it
<kwmonroe> ~charmers group info, for those interested:  https://jujucharms.com/community/charmers
<mbruzek> kvm!
<mbruzek> awesome
<stormmore> howdy juju world! :)
<arosales> stormmore: hello
<stormmore> lazyPower - just your daily friendly reminder ;-)
<bdx> rick_h: are you guys watching deltas too, on the hosted controller? statistics for usage and events per user, per model?
<bdx> not sure if you planned to touch on the hosted controller ...
<Merlijn_S> to create a charm that drives other charms
<Merlijn_S> the stacks idea
<arosales> hello Merlijn_S
<arosales> indeed, we have seen that use case come up time and time again
<stormmore> isn't that what bundles do?
<arosales> stormmore: a bundle is static, or just a yaml description for a given solution
<arosales> stormmore: what folks have been talking about is if you wanted to have an auto-scaler charm it would need extra privileges to add-unit on another charm
<arosales> today a charm can't take juju admin tasks on another charm
<arosales> so that is the thought here stormmore
<stormmore> arosales: that would be indeed be cool. I want to eventually be able to spin up and spin down nodes based on load in my bare metal environment using MaaS
<Merlijn_S> PS: for show and tell; I'm working on a Charm for the Eclipse Che cloud editor + Charming integration. Very rough early work, but if anyone is interested: https://jujucharms.com/u/tengu-team/eclipse-che/
<Merlijn_S> Basically an IDE running in your browser that connects to a charmbox with all the juju tools preinstalled
<arosales> stormmore: indeed and some folks have been thinking about that with libjuju and juju-2.0 so stay tuned to the list for that work
<stormmore> arosales: awesome, another "getting ahead of myself" situation :)
<arosales> good you're thinking in that direction
<rick_h> Merlijn_S: oooh shiny
<arosales> Merlijn_S: very interesting, taking a look now
<stormmore> yes and it is awesome that I am not the only one. reason #143 of why I choose MaaS and Juju for this environment ;)
<arosales> stormmore: :-)
<arosales> Merlijn_S: perhaps we should show this in the next juju show if you are up for it
<arosales> Merlijn_S: rick_h was thinking of doing a couple of juju shows at the summit next week. At a min to recap
<Merlijn_S> arosales: I'll probably do a lightning talk about it
<arosales> Merlijn_S: +1
<arosales> Merlijn_S: we will also be recording talks
<Merlijn_S> arosalesL +1 :)
<jcastro> ok so we don't have slots for lightning talks
<jcastro> so I think we should perhaps start consolidating talks
<jcastro> or asking people if they need the full 40 minutes
<arosales> jcastro: ya we should look to make some room
<arosales> jcastro: or shorten talks, +1
<jcastro> we could also ask matt/chuck to bin the kubernetes talks and propose those as lightning talks in the kubes track?
<arosales> jcastro: I think we could shorten the talks each by 5-10 min each day to at least leave 30 min at the end of the day
<stormmore> talks? where do these happen?
<arosales> thats 6 lightning talks across the 2 days, each at 10 min
<arosales> stormmore: summit.juju.solutions
<arosales> Gent, Belgium next week
<jcastro> yeah the problem is we can't really change the timeslots, we inherit those from cfgmgmntcamp
<jcastro> so like, snacks and drinks and breaks are all on that schedule
<arosales> ah
<jcastro> I mean, we could fit 2 in one
<jcastro> but customizing the schedule is out
<jcastro> the slots I mean
<arosales> gotcha
<arosales> jcastro: so then our only option is to consolidate
<arosales> jcastro: what time do we end on Tuesday?
<jcastro> http://cfgmgmtcamp.eu/schedule/index.html#juju
<arosales> lolz
<arosales> yes I am looking at that
<jcastro> oh, well talks are 40 minutes
<jcastro> so 16:20, bus at 16:30~1700
<arosales> on monday james' talk is at 17:00
<arosales> but on tuesday last talk is 15:40
<stormmore>  you may want to update the channel topic, summit.jujucharms.com is failing dns right now
<arosales> kwmonroe: perhaps post call mbruzek would like to learn more about resources and cwr-ci
<jcastro> marcoceppi: ^^
<jcastro> I think it's safe to just link to the direct schedule good call
* jcastro changed the topic of #juju to: Join us at the Charmer Summit: 6-7 Feb - http://cfgmgmtcamp.eu/schedule/index.html#juju || https://review.jujucharms.com/ || https://jujucharms.com/docs/
* jcastro changed the topic of #juju to: Join us at the Charmer Summit: 6-7 Feb - http://cfgmgmtcamp.eu/schedule/index.html#juju || https://review.jujucharms.com/ || https://jujucharms.com/docs/ || http://goo.gl/MsNu4I || Youtube: https://www.youtube.com/c/jujucharms
<kwmonroe> http://summit.juju.solutions/ works for me
<jcastro> sorry for the spam
<jcastro> it's been glitchy all month
<rick_h> ty arosales mbruzek kwmonroe bdx and pete not tim!
<kwmonroe> :)
<arosales> thanks for hosting rick_h !
<rick_h> if anyone has anything else for the notes please fill it in like kwmonroe is doing and I'll copy/pretty up for the youtube desc
<jcastro> arosales: ok so who are we combining?
<jcastro> we should do this now because I have to start packing soon, I have a pre-Gent trip to cram in before summit-ing.
<arosales> jcastro: hangout?
<jcastro> omw
<marcoceppi> stormmore: jcastro it's summit.juju.solutions.......
* marcoceppi changed the topic of #juju to: Join us at the Charmer Summit: 6-7 Feb - http://summit.juju.solutions || https://review.jujucharms.com/ || https://jujucharms.com/docs/ || http://goo.gl/MsNu4I || Youtube: https://www.youtube.com/c/jujucharms
<stormmore> @marcoceppi yes I was aware of that just the link in the topic was wrong ;-)
<jcastro> someone was complaining that summit.juju.solutions was the busted one
<jcastro> last week
<jcastro> that doesn't excuse the wrong url in the topic though. /me runs
<marcoceppi> stormmore: <3
<marcoceppi> jcastro: yeah, that link has never been broken, we should just get summit.jujucharms.com pointed as well
<stormmore> 301 redirect time marcoceppi!
<rick_h> ok, video updated. /me runs to get the boy from school
<rick_h> kwmonroe: let's chat later on blog/email follow up please
<rick_h> kwmonroe: thanks so much for presenting and putting that together!
<kwmonroe> np rick_h - thanks for the airtime!
<Guest21821> I used to have my charms working in trusty
<Guest21821> I recently moved to xenial
<Guest21821> I find that my charms are failing in the install hook and the log has the following errors
<Guest21821> 2017-02-01 19:37:41 INFO juju.worker.meterstatus connected.go:112 skipped "meter-status-changed" hook (missing)
<Guest21821> 2017-02-01 19:38:10 INFO juju.worker.leadership tracker.go:184 contrail-control/0 will renew contrail-control leadership at 2017-02-01 19:38:40.054544153 +0000 UTC
<Guest21821> 2017-02-01 19:38:22 INFO install /usr/bin/env: 'python': No such file or directory
<Guest21821> 2017-02-01 19:38:22 ERROR juju.worker.uniter.operation runhook.go:107 hook "i
<Guest21821> The /usr/bin/env file is indeed there, and if I manually install the packages it works
<Guest21821> Can you please let me know, why I am seeing this error?
<Guest21821> Any help is much appreciated
<stormmore> wonder if you are getting tripped up with a Windows CRLF problem
<stormmore> or it could be a related bug to https://bugs.launchpad.net/charms/+source/odl-controller/+bug/1555422
<mup> Bug #1555422: On Xenial: install /usr/bin/env: 'python': No such file or directory <uosci> <odl-controller (Juju Charms Collection):Fix Committed by james-page> <https://launchpad.net/bugs/1555422>
<Guest21821> @mup and @stormmore, can you please let me know how I can apply the patch for this fix
<Guest21821> I am using Juju2.0
<stormmore> that I can't do, sorry :-/
<bdx> question concerning legacy hooks in reactive charms, should this work http://paste.ubuntu.com/23907069/ ?
<tvansteenburgh> Guest21821: which charm is it?
<bdx> oops, this http://paste.ubuntu.com/23907074/
<Guest21821> I am using the contrail charms that i am developing
<bdx> my 'upgrade-charm' hook just doesn't seem to be firing ... I'm wondering if there is something else I need to add ...
<bdx> oops
<stormmore> bdx should the @hook not be @hooks?
<Guest21821> @mup, @bdx, if I install the python2 package, will it resolve the issue?
<tvansteenburgh> Guest21821: your charm needs to either install python2, or use python3 instead
<bdx> stormmore: whoops ... yeah .. that might be my bad (typo) .. thx
<Guest21821> @tvansteenburgh, thanks. How do I install python2 from the charm
<Guest21821> what will be the package name?
<stormmore> no worries bdx, just what I noticed from a quick glance
<tvansteenburgh> Guest21821: python
<Guest21821> @tvansteenbursh, just apt-get install python will do?
<tvansteenburgh> Guest21821: yeah, is it a bash charm?
<Guest21821> @tvansteenburgh, no it is a python charm
<tvansteenburgh> Guest21821: if it's a reactive charm you can put it in layer.yaml
<Guest21821> @tvansteenburgh, no it is a python charm. It is not a reactive charm
<Guest21821> I will just mention 'python' in the list of packages I have
<tvansteenburgh> Guest21821: those are not mutually exclusive
<tvansteenburgh> Guest21821: for example https://github.com/juju-solutions/review-queue-charm/blob/master/layer.yaml
<Guest21821> I meant it is not a bash charm but python charm
<tvansteenburgh> Guest21821: right, but the charm i linked above is also python, but it uses the reactive framework
<Guest21821> @tvansteenburgh, mine does not use reactive charm
<tvansteenburgh> Guest21821: ok
<Guest21821> Let me install the python package from the charm and see how it goes
<Guest21821> Thanks a lot
<tvansteenburgh> np
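For reference, the layer.yaml route tvansteenburgh suggests above would look something like this. This is a hypothetical fragment: the basic layer's packages option lists apt packages that get installed before any hook runs, which is what a charm with python2 hooks needs on xenial.

```yaml
# Hypothetical layer.yaml for a reactive charm whose hooks still use
# /usr/bin/python: declare python as a base package so layer:basic
# installs it before the install hook fires.
includes:
  - 'layer:basic'
options:
  basic:
    packages:
      - python   # Python 2, absent by default on xenial
```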
<lutostag> cory_fu: for the invite I get 'This invitation is invalid. ' when I try to accept for crashdump :/
<cory_fu> lutostag: Ah.  I was hoping that the admin invite would transfer when I moved it to https://github.com/juju/juju-crashdump but it didn't.
<cory_fu> marcoceppi: Since this move was at your behest, can you give lutostag and the big software team access?
<cory_fu> lutostag: I should ask, did you see the context for this move?
<marcoceppi> cory_fu: you all do have access
<marcoceppi> cory_fu: it could have stayed in juju-solutions, fwiw
<marcoceppi> cory_fu: check your perms now
<marcoceppi> cory_fu: you have admin
<cory_fu> marcoceppi: I thought all new mature projects are supposed to go juju?
<marcoceppi> cory_fu: true
<marcoceppi> we need to move a lot of things then ;)
<cory_fu> lutostag: For reference: https://github.com/juju/plugins/pull/75
<cory_fu> marcoceppi: Yeah, I thought that was the plan, as it made sense for each repo.
<marcoceppi> cory_fu: true, we should move charms.reactive and such
<cory_fu> marcoceppi: Yes, we should
<marcoceppi> cory_fu: lets chat at the summit
<lutostag> cory_fu: ah neato. I don't care where it lives, but somewhere on its own probably does make more sense
<marcoceppi> cory_fu: get a list and make a plan
<cory_fu> marcoceppi: I won't be at the summit, but tvansteenburgh, Merlijn, and tinwood will be there.
<marcoceppi> cory_fu: doh
<cory_fu> :/
<bdx> kwmonroe: so, I had typos in my pastebin, not my charm, I'm still not getting the 'upgrade-charm' hook to fire
<bdx> kwmonroe: http://paste.ubuntu.com/23907168/
<bdx> my log shows http://paste.ubuntu.com/23907184/
<cory_fu> bdx: You're mixing reactive and non-reactive.  http://pastebin.ubuntu.com/23907195/
<bdx> cory_fu: that would do it! thanks!
<cory_fu> np
<siva_guru> @tvansteenburgh, I installed the python package in my charms. I don't get the old error but it still fails in the install hook
<siva_guru> I get the following error
<siva_guru> 2017-02-01 20:49:13 ERROR juju.worker.dependency engine.go:539 "metric-collect" manifold worker returned unexpected error: failed to read charm from: /var/lib/juju/agents/unit-contrail-control-0/charm: stat /var/lib/juju/agents/unit-contrail-control-0/charm: no such file or directory
<siva_guru> 2017-02-01 20:49:13 INFO worker.uniter.jujuc tools.go:20 ensure jujuc symlinks in /var/lib/juju/tools/unit-contrail-control-0
<siva_guru> 2017-02-01 20:49:13 INFO w
<siva_guru> Sorry , I still see the same error
<siva_guru> 2017-02-01 20:49:43 INFO juju.worker.meterstatus connected.go:112 skipped "meter-status-changed" hook (missing)
<siva_guru> 2017-02-01 20:49:43 INFO install /usr/bin/env: 'python': No such file or directory
<siva_guru> 2017-02-01 20:49:43 ERROR juju.worker.uniter.operation runhook.go:107 hook "install" failed: exit status 127
<siva_guru> 2017-02-01 20:49:43 INFO juju.worker.uniter resolver.go:100 awaiting error resolution for "install" hook
<siva_guru> 2017-02-01 20:49:48 INFO juj
<tvansteenburgh> siva_guru: really hard to diagnose without seeing charm source code
<siva_guru> Let me paste the install hook for you
<siva_guru> PACKAGES = [ "python", "docker.io" ]
<siva_guru> @hooks.hook()
<siva_guru> def install():
<siva_guru>     apt_upgrade(fatal=True, dist=True)
<siva_guru>     apt_install(PACKAGES, fatal=True)
<siva_guru>     load_docker_image()
<siva_guru> http://paste.ubuntu.com/23907302/
<siva_guru> @tvansteenburgh, I find that the python package was not installed even though it is there in the list of packages
<tvansteenburgh> siva_guru: can you link to the repo or something? also maybe pastebin the entire juju debug-log
<cory_fu> petevg: I got this with the new juju-crashdump repo and the latest matrix code:
<cory_fu> matrix:216:execute_process: ERROR retrieving SSH host keys for "ubuntu/1": keys not found
<cory_fu> petevg: Shouldn't that be resolved?
<petevg> cory_fu: I believe that's okay. It probably means that ubuntu/1 had gone away.
<petevg> cory_fu: ... or that it hadn't come up.
<cory_fu> petevg: Oh, wait.  There never should have been a /1 if I'm reading this right
<petevg> cory_fu: I got that error, threw things into a debugger, and confirmed that the ssh trick worked, and that glitch had just added the machine.
<petevg> cory_fu: glitch probably added the /1
<cory_fu> petevg: http://pastebin.ubuntu.com/23907345/
<cory_fu> petevg: You're right
<cory_fu> I missed the "add_unit" at the top due to glare on my monitor.  >_<
<petevg> cory_fu: cool. I think that it's worth continuing to watch, and I don't think that we should squelch those messages, but I'm 95% certain that the ssh fix is working, and that message is okay.
<cory_fu> petevg: Is there any way we can improve or skip the error message in the case that glitch added a unit and it's not up yet?
<cory_fu> Why do you think we shouldn't drop those messages (for that particular case)?
<petevg> cory_fu: if you can think of a way to squelch it that doesn't squelch actual errors, I'm all ears.
<cory_fu> Ah
<cory_fu> Yeah, I don't have any ideas.  :p
<petevg> cory_fu: yeah the error is being generated by juju-crashdump, and glitch is the thing that knows about the added machine.
<cory_fu> petevg: I do think that glitch probably shouldn't terminate until the units it added are up and healthy, otherwise we're not actually testing that add_unit works
<cory_fu> But maybe it can just do that at the end, instead of blocking before the next glitch step?
<petevg> cory_fu: true. Right now, the only way to wait is our health check, though, and that only works once per test.
<petevg> cory_fu: adding a more general "wait 'til everything is good" check makes sense to me, though. I'll make an issue and a ticket.
<cory_fu> petevg: Thanks
<petevg> np
<siva_guru> @tvansteenburgh, the code is in a private repo
<siva_guru> I can cut n paste the entire juju log
<siva_guru> will that help?
<siva_guru> @tvansteenburgh, after I manually install it and do a juju resolved, it goes through
<siva_guru> Any idea why the charm is not installing it
<tvansteenburgh> siva_guru: log might help, yeah
<bdx> kwmonroe: what is the scoop on the pipeline you demoed using private interfaces/layers? e.g. my interfaces and layers are not on interfaces.juju.solutions
<tvansteenburgh> siva_guru: you're sure the unit has the new charm code?
<siva_guru> @tvansteenburgh, here is the log
<siva_guru> http://paste.ubuntu.com/23907417/
<kwmonroe> bdx: great question.  currently, the jenkins job will shell out to 'charm build' (https://github.com/juju-solutions/layer-cwr/blob/master/templates/BuildMyCharm/config.xml#L60).  we do not support adding flags to charm build, but we should.  i think you'd need the job to do 'charm build --interface-service=http://private.repo'.
<bdx> kwmonroe: I see, how do I make myself an 'interface-service'?
<kwmonroe> bdx: would you please open an issue requesting that charm build support private interface registries?  https://github.com/juju-solutions/layer-cwr/issues
<bdx> yes
<bdx> kwmonroe: https://github.com/juju-solutions/layer-cwr/issues/49
<kwmonroe> thx bdx!  i'm looking for docs on making your own interface service, but am coming up empty.  cory_fu, do you recall what 'charm build --interface-service=foo'  requires for foo?
<kwmonroe> i think it might be as simple as running a python -m SimpleHTTPServer in your $INTERFACE_PATH somewhere
<cory_fu> kwmonroe, bdx: https://github.com/juju-solutions/juju-interface
<kwmonroe> ah, cool, thx cory_fu
<tvansteenburgh> siva_guru: does your install hook source file have a shebang line at the top?
<bdx> kwmonroe, cory_fu: looking through https://github.com/juju-solutions/juju-interface, a) this is great! b) I'm not seeing how/where I might add a private registry entry, possibly that functionality doesn't exist yet ..
<cory_fu> bdx: I don't think there's any support for private entries at the moment.  You'd have to run your own instance of that service and point to it with the --interface-service variable.
<bdx> I'm wondering if ^ will just give me a gui, similar to interfaces.juju.solutions that I can log into and add my private repos in the ui possibly?
<bdx> it looks like that is the site interfaces.juju.solutions
<cory_fu> bdx: Yes, that is the application that runs interfaces.juju.solutions
<cory_fu> I'm not sure if there's a charm for it, but there ought to be
<bdx> cory_fu: I see, so if I was to run it locally, I could just login and manually add my private repo interface entries then eh?
<cory_fu> Right
<bdx> ok, nicee
<lazyPower> cory_fu - not at this time
<lazyPower> cory_fu - there was a TODO i took to write one for it, but i've since been busy with the k8s work. However we've also talked about just running it in k8s as a manifest since its a cloud native app as it were
<lazyPower> i mean it uses mongo as its backing store, so its webscale already right? thats like CN right?
<cory_fu> ha
<lazyPower> bdx - lmk if you need any help with that. as i'm the current maintainer of the interfaces instance
<siva_guru> @tvansteenburgh, yes it has the shebang line #!/usr/bin/env python
<tvansteenburgh> siva_guru: well that's the problem
<tvansteenburgh> siva_guru: the script is trying to use python2 to install python2
<siva_guru> @tvansteenburgh, should I remove it as a solution?
<siva_guru> this works fine in trusty though
<tvansteenburgh> siva_guru: python2 is not installed on xenial by default
<tvansteenburgh> siva_guru: you could try running the script with python3 instead
<stormmore> siva_guru: try changing the shebang line to #!/usr/bin/env python3
<siva_guru> @stormmore, I will try that
<stormmore> siva_guru: you might run into other problems so I would recommend updating your code to python3
<siva_guru> @stormmore, what is the default python version that will be used if I don't put any shebang in the code?
<tvansteenburgh> siva_guru: it won't work at all
<stormmore> a linux system won't know which interpreter to use
<stormmore> https://en.wikipedia.org/wiki/Shebang_%28Unix%29
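[editor's note] A minimal py3-safe hook header along the lines stormmore suggests; the explicit version check is an illustrative addition, not something from the thread:

```python
#!/usr/bin/env python3
# The shebang selects python3, which xenial ships by default.
# Fail fast with a clear message if a py2 interpreter runs this
# anyway, instead of dying later on py2-only syntax.
import sys

if sys.version_info[0] < 3:
    sys.exit("this charm's hooks require python3")

print("running under python%d" % sys.version_info[0])
```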
<bdx> lazyPower: thx, will do
<bdx> https://git.launchpad.net/layer-apt/
<bdx> http://paste.ubuntu.com/23908017/
<bdx> `charm build` is failing me
<bdx> bc ^^
<bdx> ahh its back now
<bdx> looks like git.launchpad was down for a moment
<lazyPower> bdx gremlins
<skuda> Hello everyone!
<siva_guru> @tvansteenburgh, @stormmore the old error is not coming anymore
<lazyPower> progress \o/
<lazyPower> o/ skuda
<siva_guru> but I find that other hooks are getting run as part of install
<siva_guru> should they be modified as well?
<lazyPower> siva_guru - I presume you're using the layered/reactive approach to charming?
<siva_guru> No.. I am not using reactive model
<skuda> I am trying to use Juju to deploy canonical kubernetes but I think I am missing something, I have four bare metal servers rented in a hosting provider, I don't have access to ipmi (nor can I install a DHCP server, for that matter)
<siva_guru> @lazyPower, No.. I am not using reactive model
<skuda> Should i not be able to use Juju and deploy to my dedicated servers? those servers have Ubuntu Xenial installed and everything working fine
<lazyPower> siva_guru - 1 SEC
<lazyPower> skuda  - you certainly can. If you don't have a functional cloud API that juju integrates with, you can certainly use the manual provider. Its less automatic than we would like, but its certainly possible to enlist those machines manually into a model and deploy CDK to them
<skuda> I would love to be able to install to those servers using LXD for example, or directly using the Ubuntu OS installed
<lazyPower> however with only 4 bare metal servers, you might be better served by kubernetes-core, as it has fewer machine requirements
<siva_guru> @lazyPower, @tvansteenburgh, how come the @hooks.hook("contrail-control-relation-joined") is getting called as part of install
<lazyPower> skuda - you can do both
<skuda> ahh I am always redirected to MAAS when reading about bare metal
<lazyPower> skuda - yeah, we prefer maas as the substrate for reasons that allow you to treat those bare metal units as a cloud, like, a proper cloud, not a manually managed cloud.
<skuda> speaking about kubernetes when launching conjure-up I am only offered localhost or MAAS
<stormmore> skuda that is cause MAAS gives Juju the "cloud" API layer
<lazyPower> skuda - but MAAS does have some assumptions there, that it will manage your DNS, and IPMI, and other settings at the metal layer, because you're basically modeling the machines in maas
<skuda> How should I "manually, it doesn't matter" instruct juju or conjure-up to use my servers?
<lazyPower> skuda - i do believe you need to add other cloud credentials in order to see the other substrates, i may be incorrect on that though
<lazyPower> mmcc stokachu ^ any feedback here on my statement? am i wildly misinformed?
<skuda> I understand what MAAS brings to the table, and I see the value, I would use it if I were controlling my datacenter, but I am not :(
<lazyPower> skuda - so you have some options here, you can juju bootstrap a manual provider controller, and enlist each machine 1 by 1, and then deploy directly to them
<stormmore> skuda I don't think conjure-up is going to be the best method for you to install with
<lazyPower> but as stormmore is alluding to, you're probably not going to be able to get a good experience with conjure unless you want 4 independent clusters, one per machine, all in lxd
<skuda> Ok, I can manually deploy the units, no problem
<lazyPower> you can use placement directives in a bundle to control how your applications are deployed, and that seems like the better bet
<lazyPower> skuda , i would however encourage you to try a lxd based deployment locally first to get familiar with how its put together
<lazyPower> skuda once you've got that initial poking done, figure out how you want the applications arranged on what machine, and then you can export a bundle and re-use it in your manual deployment
<skuda> Still have to familiarize myself a little bit more with Juju but it should not be a problem if I can create a manual provider controller someway
<lazyPower> skuda - so i'm going to be traveling over the next week, but i'll make sure i pop in here to see how things are going. If all else fails, make sure you mail the juju list juju
<lazyPower> argh
<lazyPower> juju@lists.ubuntu.com, and i'll monitor it like a hawk to help you through the manual deployment or any questions you have about the lxd initial poking
<lazyPower> but thats my suggested route, is to deploy on lxd first and get a feel for it
<skuda> placement directives... ok.. I will check all that information, thanks
<lazyPower> then go for the manual step, as manual denotes, if something gets botched, you're likely to have to wipe the model, then reinstall each machine base OS + re-enlist in a new model
<lazyPower> and thats time consuming
<lazyPower> and i want to be respectful of your time/effort
<skuda> I will lazyPower, tomorrow I will deploy in local LXD
<lazyPower> awesome \o/
<lazyPower> and you'll get the conjure experience there
<lazyPower> i'll see if i can talk to adam about the conjure bits while we are in ghent, maybe there's a better story there
<lazyPower> as more people are showing up with BM clouds, this is going to be a growing concern
<skuda> it seems pretty awesome juju and conjure
<lazyPower> we can probably get this somewhere on the roadmap at some point and try to come up with something better than "fork the bundle and make edits"
<skuda> I wanted to try OpenStack too because we are choosing the best tool for our project
<lazyPower> yeah man, you can openstack on lxd too if you have the horsepower
<lazyPower> great way to poke at it and see if you like it
<lazyPower> very cheap to experiment
<skuda> and those are two really complex beast that seems to be muuuuuuuuch easier in Juju, it's awesome, I hope everything works fine in my tests!
<lazyPower> skuda - if not, i want your feedback
<lazyPower> positive/negative/indifferent, it all helps
<skuda> I will send to the mailing list any roadblock I found, sure
<lazyPower> :D
<lazyPower> fan-tastic
<lazyPower> glad i ran into you then :D
<lazyPower> siva_guru - ok sorry about that, i'm very passionate about k8s
<stormmore> I was going to suggest that maybe an OpenStack cluster would be a good solution to put down first and then install CDK in VMs on it
<lazyPower> siva_guru - so, its a classic charm, with a single hook file i presume symlinked?
<lazyPower> and your *-relation-joined hook is executing during the install phase?
<siva_guru> @lazyPower, yes it is a single hook file symlinked
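[editor's note] For context, the symlink scheme being discussed works roughly like this: each hook name is a symlink to one script, and a registry dispatches on the basename the script was invoked as. A simplified stand-in for charmhelpers' Hooks class, not the real implementation:

```python
import os


class Hooks:
    """Simplified stand-in for charmhelpers' Hooks: register handlers by
    hook name, then dispatch on the filename the script was invoked as."""

    def __init__(self):
        self._registry = {}

    def hook(self, *names):
        def register(fn):
            # default to the function name, underscores as dashes
            for name in names or (fn.__name__.replace("_", "-"),):
                self._registry[name] = fn
            return fn
        return register

    def execute(self, args):
        # args[0] is the symlink that was actually run, e.g. hooks/install
        self._registry[os.path.basename(args[0])]()


hooks = Hooks()


@hooks.hook("install")
def install():
    print("install running")


@hooks.hook("contrail-control-relation-joined")
def relation_joined():
    print("relation-joined running")


# simulate juju invoking the charm through the "install" symlink
hooks.execute(["hooks/install"])
```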
<skuda> In reality what I would love to have is a solution with a good UI able to manage LXD containers with live migration and ZFS deduplication, but it seems really difficult to find
<lazyPower> siva_guru - i would presume one of two things has happened
<lazyPower> 1) there's some dirty state on the unit (least likely culprit)
<skuda> So I am right now testing different options to be as close to possible to what we want
<siva_guru> @lazyPower, yes it is a single hook file symlinked and yes relation-joined hook is getting called during install phase
<lazyPower> 2) there's a code error somewhere in the code thats falling through and executing that hook stanza
<catbus1> I saw K8 in lxd.. Just wanted to share my recent experience on this. I followed https://www.stgraber.org/2017/01/13/kubernetes-inside-lxd/ and there are only two things I need to change for a successful deploy.
<lazyPower> siva_guru - like perhaps the method itself is being invoked directly
<stormmore> skuda LXDs seem good from a systems stand point but Docker containers / Kubernetes is more dev friendly
<lazyPower> from within the install() block
<catbus1> one is to add "local:" prefix to the lxd container name, which is 'kubernetes' in stgraber's example.
<lazyPower> man stgraber is a beast. just sayin
<lazyPower> that guy is like the local container legend 'round these parts
<skuda> stormmore: true, but LXD offers live migration and docker not, not yet at least
<catbus1> The other is to limit the zfs pool size so that you don't run out of disk space on the system. I used my 3-year old laptop, but if it's a rental from a data center, you probably don't have to worry about this.
<lazyPower> skuda <3 you get it
<siva_guru> @lazypower, the same charm code works fine in trusty... I am seeing this issue in xenial
<lazyPower> siva_guru - doesn't seem like series would cause the weirdness though.
<lazyPower> siva_guru - i guess py2 vs py3? but i would have expected to see things like type errors and syntax errors, not random hook execution
<stormmore> skuda: not sure they will offer live migrations in Docker, seems their view is you should be running multiple instances of your service and make the service handle the loss of an instance
<siva_guru> @lazyPower, yes I moved from py2 to py3
<skuda> well and the state, sometimes I don't want to use slower cluster filesystems just to be able to put everything on top of docker
<lazyPower> siva_guru - thats what i'm saying, i'm thinking outloud with you here. as we dont have hook code to look at, its very hard to debug
<lazyPower> siva_guru - so the best i can do is offer thoughts while you debug
<skuda> I mean Docker it's awesome and everything, we use for many stateless apps and the orchestration of those services it's awesome, but it's not the solution for everything I think
<siva_guru> @lazypower, is this a bug or is this something I need to fix in my charm code to make it work with py3
<lazyPower> stormmore - i think there's room for both in your DC/workflow. LXD is amazing at handling just about every class of workload, docker is engineered and sold as a very specific class of workload.
<stormmore> skuda that should be handled by replication between the instances
<lazyPower> siva_guru - well without seeing the code, i can only guess
<lazyPower> siva_guru and i'm going to guess its in the hook code
<stormmore> lazyPower oh I definitely agree :)
<siva_guru> @lazyPower, do you need my code or are you talking about the juju code?
<skuda> In the project we are creating right now, minecraft servers, we are speaking about big IOPS needs and a lot of state, but only 1 instance needed per server
<lazyPower> siva_guru - i mean your code, the charm code you are working on thats exhibiting the bad behavior
<lazyPower> skuda ooooh man
<lazyPower> skuda would you like an ARK server workload for k8s to test with?
<skuda> So I can't make good use of all these amazing replication sets
<lazyPower> i just wrote the manifest for that a couple weeks ago for my homelab and my friends and I have been beating on it quite furiously. we are quite enamored with how well its performing
<skuda> sure
<lazyPower> skuda - you sound like you could get away with just charming up the workload, and juju deploying it directly into lxd
<skuda> yes, I think so
<lazyPower> some minor network config i believe will need to happen on the host to forward things correctly, but thats minor, and we can totally get you up and running on just lxd and juju in short order
<lazyPower> and that networking bit should be sorted in one of the forthcoming juju releases, we have even more networking goodness in the oven afaik
<skuda> right now I have two options, with different tradeoffs about this project
<lazyPower> dont hold me to that, but i'm like 80% certain that is the case
<skuda> use OpenStack and manage LXD as virtual machines, managing live migration, using local SSD, good
<skuda> use k8s being able to do crash recovery on a cluster filesystem like scaleIO
<skuda> no live migration for k8s
#juju 2017-02-02
<lazyPower> wellllllll
<skuda> different tradeoffs, just testing now
<lazyPower> thats not *entirely* true
<lazyPower> you would replicate it to another unit, and that filesystem would be networked right? so the state is still available
<siva_guru> @lazyPower, here is the hooks code
<lazyPower> and if you use a deployment its a blue/green
<siva_guru> http://paste.ubuntu.com/23908126/
<skuda> yes, slower, but there
<lazyPower> skuda - here's the ark manifest https://gist.github.com/a55050f50fde8daf11434a09023eef8f
<skuda> hehehe
<lazyPower> skuda oh you bet, lxd live migration would just blow the stuff and things out of the water there
<siva_guru> @lazypower, I am seeing the following error in the logs
<siva_guru> 2017-02-01 23:45:45 INFO install   File "/var/lib/juju/agents/unit-contrail-analytics-0/charm/hooks/install", line 92
<siva_guru> 2017-02-01 23:45:45 INFO install     print "NUM CONTROL UNITS: ", len(units("contrail-control"))
<siva_guru> 2017-02-01 23:45:45 INFO install                               ^
<siva_guru> 2017-02-01 23:45:45 INFO install SyntaxError: invalid syntax
<lazyPower> and not look back
<skuda> but I can not do a live migration, all the players will be kicked during the migration window
<lazyPower> right
<siva_guru> @lazpower, the same code works fine with py2
<lazyPower> its possible to CRIU in docker, but most of the demo's i've seen of this have not been k8s
<skuda> on the other hand it's pretty neat to know that k8s is going to relocate everything automatically when something fails
<lazyPower> its been pure docker, with some wizardry in the backend thats not been shared
<skuda> thanks for the manifest lazyPower
<lazyPower> np skuda, if you want the docker source too (like you dont trust me, which you shouldnt, i'm a stranger) i can send you over the dockerfile
<skuda> I can check the Dockerfile in the registry, no?
<lazyPower> i dont think i published it
<skuda> ahhh
<lazyPower> i think i just docker pushed because i too like pain
<skuda> hahaha
<lazyPower> siva_guru  looking now
<skuda> ok, then... if you could send me it the Dockerfile would be awesome
<skuda> :)
<lazyPower> siva_guru - that error is py3 complaining that you didnt paren your print statement , it should read:  print("NUM CONTROL UNITS: ", len(units("contrail-control")))
<skuda> there isn't another cluster aware ui for LXD other than OpenStack, is there?
<lazyPower> so thats a python3 error, not a hook execution error, it hasn't actually executed that bit, python is interpreted
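[editor's note] More precisely, Python parses and byte-compiles the whole hook file before executing any of it, so py2-only syntax in a relation handler aborts the install hook too, even though that handler never ran. A small demonstration; the source string is illustrative, echoing the pasted traceback:

```python
# A syntax error anywhere in the shared hook file breaks every hook,
# because the whole module is compiled before anything executes.
source = '''
def install():
    pass

def contrail_control_relation_joined():
    print "NUM CONTROL UNITS: ", 3   # py2-only print statement
'''

try:
    compile(source, "hooks/install", "exec")
    outcome = "compiled cleanly"
except SyntaxError as err:
    outcome = "SyntaxError at line %d" % err.lineno
print(outcome)
```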
<lazyPower> skuda - let me get back to you on that one
<stormmore> I wonder if k8s will do live migrations ever, don't think so cause of that assumption of being able to suffer the loss of a container temporarily until it spins up another
<lazyPower> skuda - i know the guys over on flockport are using lxd, and there's some other stuff, but you do know that juju does lxd dontchya? :D
<skuda> I found some projects in github but all of them were about to manage one node
<siva_guru> @lazyPower, that's the minor thing. The thing I am concerned is how is the relation-joined hook getting called as part of install?
<lazyPower> siva_guru - install the python3 flake8 checker, and flake8 your code, it will help you catch all those python3 errors
<skuda> hahaha yes lazyPower it's something to manage the cluster after it's created and get graphs, repeating tasks support and niceties like those.
<lazyPower> siva_guru - i see no evidence of it being called though, thats a perfectly acceptable error as python is interpreted, so it was looking through the code file before it executed to map its control flow
<lazyPower> siva_guru - python3 flake8 your code, and give it another go once you've resolved the python3 changes
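[editor's note] flake8 catches more classes of problems, but even the stdlib can flag this kind of breakage before deploy; a sketch using py_compile, where the temp file and its contents are made up for illustration:

```python
import os
import py_compile
import tempfile

# write a fake hook containing a py2-only print statement
hook_path = os.path.join(tempfile.mkdtemp(), "install")
with open(hook_path, "w") as f:
    f.write('print "hello from py2"\n')

# doraise=True turns compile failures into catchable exceptions
try:
    py_compile.compile(hook_path, doraise=True)
    result = "compiled cleanly"
except py_compile.PyCompileError:
    result = "syntax error caught before deploy"
print(result)
```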
<lazyPower> siva_guru if its still misbehaving i'll eat my hat and we'll take another look at why its misbheaving
<lazyPower> and i may not eat my hat, because shrimp taco's were delicious and i'm not hungry after eating them
<stormmore> lazyPower - video or it didn't happen :P
<skuda> stormmore: k8s will not have live migration in a time at least, they are very focused on services with more than 1 instance available at the same time, If you can trust that's always the case you don't need live migrations
<lazyPower> stormmore whyyyy did i know you'd have peanut gallery commentary after that? :D
<siva_guru> @lazyPower, thanks. Will do
<lazyPower> skuda - theres the whole class of workload thing again
<skuda> but it's not always the case, this is the reason why vms are not going to disappear sometime sooon
<stormmore> skuda yeah I know, especially considering who is behind k8s and their ethos
<stormmore> lazyPower - humor is the bread of life :P
<lazyPower> skuda https://gist.github.com/89f4c7596c0a8ee3c47422e63db1a23a
<skuda> thanks lazyPower
<lazyPower> np np
<stormmore> skuda but it is the case, Google proved that by running their whole environments that way
<lazyPower> unofficially, if that explodes you own both halves. but its a fun validation workload since you're already doing game servers
<lazyPower> might be fun ot mix it up and add ARK to the list, as private servers seem to be the way to go there
<lazyPower> unless you like pain
<lazyPower> then play on the official servers and enjoy the unfettered RUST abusers
<skuda> google is pretty much stateless, I mean stateless like https requests
<skuda> they don't usually keep sockets open for much time, it's about request after request
<stormmore> skuda I get that but they run stateful services the same way as stateless
<skuda> and that makes sense for them, sure
<skuda> well the industry is going after that now, and it's amazing for many use-cases
<stormmore> stateless only gives you so much until you have to store something in a stateful service
<stormmore> I will admit that stateful services take a bit more planning when you are running them in containers
<skuda> well that's the trap many people get caught in, "orchestrate all the stateless nginx that you want, you are going to finally consume dynamo, or ebs, or 'put your favourite stateful service here' and pay for it big"
<skuda> but things are getting better slowly anyway
<skuda> I would like to see more usage and integration of LXD, it's a container with many good things from vms
<skuda> but Docker get all the attention
<lazyPower> ^ that
<stormmore> Docker is more mature by a long way
<lazyPower> i think what has hindered lxd adoption is the fact you dont get a native feel on other clients like osx
<lazyPower> it kind of requires an ubuntu rig to really shine. i'm sure someone will fight me on that
<lazyPower> but thats my 2 cents
<skuda> some people think the problem is that it is too tied to Ubuntu too
<lazyPower> as someone that walks the line of both
<lazyPower> i hear all those statements and i want to hug them and ask them to humor me
<lazyPower> but nobody ever does
<skuda> hahahaha
<stormmore> That goes to my comment about maturity for LXD
<lazyPower> i'm not sure i agree, but thats a matter of opinion anyway
<lazyPower> and if we agreed on everything stormmore we would be super boring
<skuda> it's a shame because I think that together Docker and LXD could make amazing things
<stormmore> heck I would have been bored already and probably moved to CoreOS or DCOS instead :P
<skuda> too many complex tricks are done today to be able to run stateful services in Docker
<skuda> LXD brings that in a super natural way, with the added plus of live migrations
<stormmore> skuda that I definitely disagree with. there is nothing too complex that you can't run in docker containers
<skuda> sure not, you have two options for example to put online 1 mysql
<skuda> slower as hell network storage
<skuda> or superpricey
<skuda> or the second option, use a local storage of the docker node running it and keep the process always there
<stormmore> skuda live migrations are only useful if you are wanting to "service" the underlying hardware; it still doesn't help you in a failed hw scenario
<skuda> and well, in MySQL at least you could use Galera or other solutions to create a cluster and try to live with it
<stormmore> skuda Ceph for network storage
<skuda> Ceph it's pretty slow
<skuda> the latency it's usually terrible and the IOPS are not much better
<stormmore> that sounds like a badly configured Ceph setup
<stormmore> CERN uses Ceph for the storage requirements with Petabyte sized clusters
<skuda> http://cloudscaling.com/blog/cloud-computing/killing-the-storage-unicorn-purpose-built-scaleio-spanks-multi-purpose-ceph-on-performance/
<skuda> stormmore: I am not speaking about size here I am speaking about speed
<skuda> and I don't have the resources of CERN to install a cluster with hundreds of computers and drives
<skuda> some databases that are cloud aware easily restore state from instances that keep working after a partial crash, ElasticSearch for example
<skuda> it takes some time and a lot of bandwidth but could be done with local storage easily with Docker
<stormmore> skuda they have clusters that do either 100 IOPS (about the same as local HDD) or 500 IOPS
<skuda> but not 100% of usages are ok with that pattern
<skuda> that's too slow for a medium/big database
<skuda> I have been using only SSD for databases like 2 years now, and before that only SAS disks, you need lots of IO sometimes
<skuda> the same for big minecraft servers
<skuda> I've seen one super big survival server full of people saturate 1 SSD
<skuda> those types of workloads are not designed to be put in Ceph
<skuda> but works amazingly well in local SSD using LXD for example
<skuda> I know live migration it's not going to solve many things (failures) that k8s solves without proper (external to lxd) clustering thought on your part
<skuda> in the project I am working on now it would mean being able to migrate minecraft servers between nodes without interruption
<skuda> obviously, the frontend, admin, api and all the webservices will be running in Docker containers orchestrated via k8s or dc/os
<lazyPower> skuda - using mcserver (or is it mcadmin? i forget) as the admin ui i assume?
<skuda> depending on the tests I will be doing the coming days maybe even the minecraft servers will be split in smaller units, as small as possible, and orchestrated via k8s or dc/os, it's one of the options I will be testing.
<skuda> nope, we are developing one
<lazyPower> oh nice
<skuda> the most used it's multicraft I think
<lazyPower> man i love it when people show up with their own solutions
<lazyPower> that to me, is far more interesting than say, hopping on github, finding a thing, and then finding a way to profit from it
<skuda> we tried it but it doesn't solve all our needs and introduces some problems
<lazyPower> yeah
<lazyPower> i used mcadmin (i think again? naming?) and it was a shitshow when it came to backups and specifically the restore
<siva_guru> @lazypower, that resolved the issue
<lazyPower> every last single one of them was corrupted
<lazyPower> siva_guru FAN TASTIC!
<siva_guru> Thanks for all your help
<lazyPower> siva_guru thats what i'm talkin bout boooyaaaaa
<lazyPower> np np
<lazyPower> happy to get you unblocked :)
<siva_guru> ;)
<siva_guru> :)
<bdx> lazyPower: "i think what has hindered lxd adoption is the fact you dont get a native feel on other clients like osxb" - entirely
<skuda> hahaha, similar problem for multicraft, backups sucks, but not only that, some other things are not fully working or very weird
<lazyPower> siva_guru - it can be tough going sometimes, especially when you're making changes you dont fully understand. sorry that bit you, but the py2->py3 change was a painful one for me at first until i started linting *everything*
<stormmore> skuda I don't know about that, 15GB/s seems pretty good even for large scale DBs
<lazyPower> bdx <3 hey dude
<lazyPower> wb
<siva_guru> @lazypower, yes.. I'm moving from trusty to xenial and from py2 to py3
<skuda> stormmore, 15Gb/s? where? with how many disks?
<stormmore> https://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf
<skuda> you are not going to get 15Gb/s without special network hardware anyway
<skuda> the total bandwidth of the cluster doesn't matter a lot
<skuda> what matters is how much my small MySQL instance on one node will be getting
<skuda> and it's totally impossible to get more bandwidth than your network card offers you, usually 1Gb, 10Gb in special situations
<bdx> haha - just reading the scrollback ... docker has an os x virtualbox wrapper now ... even though the docker containers aren't really being deployed to the osx host, it gives devs the feel/usability as if they were running native
<stormmore> skuda sure you can, you can bond NICs. My understanding, last I really looked at CERN, was they were using 10Gb x 2 for each side of their cluster
<skuda> 150 clients
<lazyPower> bdx - s/virtualbox/xhyve/
<bdx> lazyPower: I'm assuming thats what you are referring to?
<lazyPower> ftfy
<lazyPower> yeah, their xhyve schenanigans
<stormmore> skuda well it is a 30PB cluster!
<bdx> ahh yea, my bad
<skuda> During March 2015 CERN IT-DSS provisioned nearly 30 petabytes of rotational disk storage
<skuda> for a 2 week Ceph test
<lazyPower> "Oh look its native!!!"
<lazyPower> dude...
<lazyPower> its boot2docker in a dress
<lazyPower> stop lying to me docker inc
<lazyPower> but i'll give them this
<lazyPower> it works really well and it's gotten a ton of bug fixes
<lazyPower> i prefer it over docker-machine now
<skuda> stormmore I don't have 30 petabytes of disks to be consumed by 150 clients at the same time hahahaha
<skuda> If I had this cluster size, probably I would be fine with Ceph, yes, but for my use case I would be better off purchasing a good SAN before that
<stormmore> skuda I get that, just pointing out that Ceph isn't as slow as you think. If it is, the design of your Ceph environment is wrong
<skuda> did you check the comparison with ScaleIO I sent to you?
<bdx> stormore: +1
<skuda> I am speaking by the way of Ceph clusters not at the scale of CERN, much smaller ones
<skuda> BTW in the cern test at 15Gb/s every client is getting 100Mbit/s
<skuda> that with 150 clients serving and writing 4Mb files, so highly sequential
<skuda> if you think that's ok for a big OLTP database we have different opinions
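The back-of-envelope math in the exchange above can be checked directly. This is only a sketch using the figures quoted from the CERN test (roughly 15 Gb/s aggregate shared by 150 clients); it assumes an even split, which real clusters won't give you exactly.

```python
# Rough per-client throughput check for the figures quoted above
# (assumed: ~15 Gb/s aggregate bandwidth, 150 concurrent clients).

def per_client_mbit(aggregate_gbit_per_s: float, clients: int) -> float:
    """Evenly divide aggregate cluster bandwidth across clients (Mbit/s)."""
    return aggregate_gbit_per_s * 1000 / clients

# 15 Gb/s shared by 150 clients -> 100 Mbit/s per client,
# well below what a single local SATA SSD can sustain sequentially.
print(per_client_mbit(15, 150))
```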
<stormmore> oh I am aware of ScaleIO and it has a different approach that Ceph. I am only considering Ceph and it checks off more boxes for my workloads than ScaleIO
<skuda> blksize	mode	threads	trans/sec	req/sec	min_req_time	max_req_time	avg_req_time
<skuda> 16384	seqwr	16	122,57Mb/sec	7844,22	0,07	1484,69	2,04
<skuda> that's a sad and old intel ssd 320
<skuda> 122Mb local, 1 disk, latency 2,04
<skuda> it's obviously much slower than current generation SSD or nvme
<skuda> still it's faster than what 1 client is able to get from that super big ceph cluster of CERN
<skuda> it's not needed for every case, sure, sometimes it is
<skuda> I am not saying ceph is not a cool tech that can work in many many cases
<lazyPower> ok i need to run some errands and i'm going to be traveling for the next few days until the 8th. So hit me up on the mailing list if you gents need anything. Otherwise i'll try to check for pings but replies are going to be super latent
<lazyPower> good luck in your exploration skuda, i'm here to help if needed
<lazyPower> stormmore - keep fighting the good fight
<skuda> only saying it's not the solution for everything
<lazyPower> bdx - poke magicaltrout in the forehead for me ;P that wily brit
<stormmore> lazyPower always and have fun in Belgium
<skuda> lazyPower: Thanks! I will contact you if I hit roadblocks!
<lazyPower> s/me/the mailing list/
<lazyPower> ftfy
<lazyPower> <3
<skuda> yes!
<skuda> mailing list, I know!!
<skuda> I am going bed now too, it's 2am here in Spain ;P
<skuda> I will try juju k8s tomorrow, first in LXD with conjure, later I will try to get it installed on the 4 dedicated servers I have for testing
<lazyPower> skuda - if you've got the time we will be in ghent belgium. you're more than invited to attend the charmer summit and we can run deployments in real time
<lazyPower> and with that i'm leaving for real this time
<Teranet> Question : juju status gives me a bit too much info, is there a way I can filter it so it only lists out the Units I have deployed ???
<lazyPower> Teranet: try 'juju status $application'
<lazyPower> Teranet or `juju status --format=short`
<Teranet> thx still not really what I like to see but better
<lazyPower> Teranet - if there's another filtered view that would be useful for you, and you dont mind filing a bug, its likely to get included in the list of filters. you can see what kind of enhanced status outputs we have available via juju status --help
<Teranet> This is almost perfect : juju status --format=oneline     just more like a table look would be nice
<Teranet> with color
<lazyPower> excellent, glad you've found something that works better for you
<lazyPower> but thats good feedback, and again a bug would be handy to reference when talking about the feature with the core devs :)
<Teranet> I certainly can file a bug if I know where this could be filed best within the juju github bug report
<lazyPower> https://bugs.launchpad.net/juju/+filebug   would be preferable
<Teranet> ok will do thx
<lazyPower> Thanks Teranet :)
<Teranet> reported it as detailed as I could  : https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1661145
<mup> Bug #1661145: Feature request for juju status  <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1661145>
<Teranet> now i do have to still figure out why neutron and openvswitch won't do VLAN's for the Openstack setup on eth1 :-(   grrrr
<mhilton> morning all
<kjackal> Good morning Juju world!
<admcleod> kjackal: :]
<Zic> lazyPower: hi (NO, it's not a new problem, as usual :>), a simple wishfeature (if you confirm it's a good idea, I can officially submit): redirect http to https in the kube-api-loadbalancer
<Zic> I can do it on my own but as the vhost file is managed by Juju, it will be overwritten
<Zic> lazyPower: just for browsing content, I understand that kubectl cannot match the redirection but it is directly configured to https in the default ~/.kube/config
<chetann> Hello ,
<chetann> need help in juju
<chetann> anybody there?
<chetann> Hi , need help in setting up a version of kubernetes using juju
<Zic> chetann: hi, describe your problem precisely
<chetann> we are running this : juju deploy cs:bundle/canonical-kubernetes-20
<chetann> or let me ask in diffrent way
<chetann> in this charm : charm: "cs:~containers/kubernetes-master-10"   how to check what version of kubernetes master it is going to provision
<chetann> when we deploy the above charm it deploys version 1.5.2 of kubernetes master, but we wish to have 1.4.4
<marcoceppi> chetann: Hi, you can do that, but it's a bit manual. Let me dig you up instructions
<chetann> ok
<chetann> thanks
<lazyPower> Zic - I think thats a good contribution
<Zic> lazyPower: I had once again a "certificate error" in one of my k8s-masters (saw this in /var/log/syslog) but it was just for one pod (kube-dns); restarting this pod (by deleting it) just fixed the problem
<Zic> (the last time, all requests had this type of error)
<lazyPower> we need to figure out why thats happening
<Zic> not so important this time as I recovered quickly
<lazyPower> Zic - fyi i'm going to be traveling until feb 8th
<Zic> the kube-dns pod was in CLBO during this time
<lazyPower> starting later today
<Zic> ok, I have no troubles anyway, just to let you know if you see some other report about this :)
<lazyPower> Zic - i'll keep that in the back of my mind and try to come up with a suggestion for us to trace this issue
<Zic> thanks :)
<lazyPower> but as it stands right now, you're finding some edge cases we haven't seen in our long running instances, or testing
<lazyPower> so its hard to really recommend a fix until we truly understand whats happening
<Zic> the weird part is that it only happened to kube-dns this time, all other requests that I saw were OK
<Zic> and it stopped when I deleted the pod and it respawned
<mbruzek> Hello Zic I see you are back at kubernetes today.
<lazyPower> o/ mbruzek
<mbruzek> \o lazyPower
<Zic> mbruzek: I didn't crash the cluster this time :D
<Zic> just saw a strange but quickly fixed error :)
<mbruzek> I have faith in you Zic, you just are not trying hard enough today. You need more coffee
 * lazyPower snickers
<lazyPower> Feel the internet-troll flow through you mbruzek. The troll exists in all of us.
<lazyPower> <3
<mbruzek> sorry. Maybe *I* haven't had enough coffee today.
<mbruzek> Zic knows how to break clusters better than anyone I know. I *like* that!
<mbruzek> I appreciate that and the feedback and the challenge.
<Zic> :D
<Zic> it's the only error I encountered in 1.5.2 :)
<lazyPower> mbruzek thats a constant state of being for me... lack of coffee
<Zic> hey, this message does not even contain a problem/error, AMAZING -> do you know how I can "clean up" the InfluxDB database? I have some old pods that's not existing anymore, and same for deleted namespaces
<Zic> I searched through InfluxDB docs but it's not very clear to me
<lazyPower> Zic - that seems like there's some latency or issue with etcd again if the pods aren't being reaped and namespaces are lingering
<Zic> oh, I thought that it was normal to conserve by default pods in InfluxDB as it can be used for history
<Zic> but here, in the drop-down list of Pods, I have some old entry of pods that didn't exist, and old namespaces :(
<Zic> I need to find some etcdctl command to explore what etcd have
<Zic> like a list of pods
<Zic> etcdctl ls / --recursive
<Zic> seems OK
<lazyPower> Zic - yeah, all of the k8s data is stored in /repository/
<lazyPower> and it tree's off down there based on object type
<Zic> I don't know if it's normal but I saw some old namespaces that's here
<Zic> but they do not contain any pods or ressources
<Zic> (and they are not shown in the kubectl get ns)
<lazyPower> i dont think it actually wipes the key-space
<lazyPower> i think it just wipes the values
<Zic> ok, it seems normal so
<Zic> I saw also some persistentVolumeClaims that no longer exist
<Zic> but as they are not returned by kubectl get pvc --all-namespaces, it seems OK also
<Zic> I don't know where InfluxDB get its obsolete pods :/
<Zic> it's not broken as new namespaces and new pods appeared in Grafana
<Zic> but the old one stayed
<Zic> and as I did so many tests, it's quite long now <3
<lazyPower> Zic - lets follow up on the k8s mailing list to ask about this. I think its behavior of the addon
<lazyPower> if the authors indicate it should be getting wiped, we probably have something slightly misconfigured
<lazyPower> or some oddity
<lazyPower> not certain which
<lazyPower> but i'll err on the former
<Zic> it's maybe the behaviour yes, I was testing Prometheus (with Grafana also) in my old K8S cluster installed by kubeadm (shame! shame! I hadn't been introduced to Juju at that time :p) and Prometheus did the wipe
<Zic> I'm just realising now that InfluxDB has maybe a different behaviour on this point
<pranav_> Hey. Can anyone here help me with a query on hooks?
<perrito666> pranav_: ask the question and well see who can help you :)
<pranav_> Alright :). I have multiple relations in my charm that I need to wait on, and I want my config-changed hook to be called after all the relations are done
<pranav_> is there a way that config-changed can be called after the relation hooks?
<perrito666> pranav_: until all the relations in one charm right?
<pranav_> yes. Right now I am moving my charm to blocked state when even one of the relations is not up
<pranav_> But once relations are done, I don't know how to automatically move on to config
<perrito666> pranav_: mm i thought config-changed was called after relation is established
<perrito666> lazyPower: happen to know anything about this?
<pranav_> The documentation says its called after install & upgrade
<perrito666> rahworks: I see, I believe your option is to check in every relation
<pranav_> Ah ok. Will have to figure a way out. Can I use the status in any way to automatically trigger something in Juju?
<pranav_> I did see the following way, but am yet to explore on it :
<pranav_> @when('apache.installed') def do_something():    # Install a webapp on top of the Apache Web server...    set_state('webapp.available')
<rick_h> pranav_: perrito666 is this a reactive charm? if so you could use state for this right?
<rick_h> pranav_: so you can track the state of each relation and then @when.... each is up execute
<pranav_> I haven't checked what reactive charm is. Any pointers to read on it?
<rick_h> https://jujucharms.com/docs/stable/developer-event-cycle
<rick_h> pranav_: ^ for some beginner notes
<rick_h> pranav_: lots of folks working on charms have experience on the mailing list and the #juju freenode channel
<rick_h> pranav_: but it's kind of a framework to help track state and make charming a bit easier
<Zic> mbruzek: h/34
<Zic> oops
<mbruzek> h/42
<Zic> that free hl... don't know why your nick was on my IRC prompt :)
<mbruzek> No problem.
<pranav_> I did read up the event thing but couldn't find anything on the reactive thing. But i will go through it once and get back post some reading. Thanks Guys! :)
<Zic> mbruzek: to eliminate immediately this use of unwanted hl, I have a question :-] -> am I right that running kubectl commands against a random kubernetes-master (locally for example, or by pointing the ~/.kube/config of your workstation at a master directly instead of the kube-api-loadbalancer) cannot do anything wrong?
<Zic> because I saw in juju status that there is an official "master" of... masters
<Zic> but as the nginx vhost of kube-api-loadbalancer just has an upstream { } block, I think it's just round-robin, right?
<mbruzek> Zic: The load balancer is our attempt at making the masters HA.
<mbruzek> Zic: You can scale up your master nodes separately from the worker nodes, and request different sizes from Juju
<Zic> yeah, but I saw a system of "lock" in /var/log/syslog which said only one of my masters is holding a "lock"
<Zic> is there a notion of an "active" kubernetes-master? or are they all active?
<mbruzek> Zic: To answer your question more directly. Yes you can point to a master directly in the configuration
<Zic> ok
<mbruzek> zic: should you lose that node, it will not work.
<Zic> I feared that I didn't understand something and was doing nasty things by sometimes running commands against a master which is not "the active one"
<Zic> mbruzek: yeah, I'm just using this when I don't have the kubectl binary locally
<Zic> I'm SSHing directly to one master and use its kubectl command
<Zic> this message prompted my question: leaderelection.go:247] lock is held by mth-k8smaster-03 and has not yet expired
<ryebot> Zic: Shouldn't matter. All of the masters use the same source of truth.
<ryebot> Zic: what was that in response to?
<Zic> because I have this kind of error sometimes in the non-locked masters, but no error at all in the locked one: jwt.go:239] Signature error (key 0): crypto/rsa: verification error / handlers.go:58] Unable to authenticate the request due to an error: crypto/rsa: verification error
<Zic> it's not like the first time where all requests have this in return
<Zic> here, is just... "sometime" in /var/log/syslog
<Zic> all my kubectl command works perfectly, the dashboard too
<ryebot> Zic: Hmm, not sure what's causing that, but I can tell you with a lot of confidence that it shouldn't matter from where you run kubectl, they all point to the same place
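For reference, pointing kubectl at a single master instead of the load balancer is just a matter of the `server:` field in `~/.kube/config`. A hypothetical cluster entry (the addresses and names below are made up, not taken from this cluster):

```yaml
# ~/.kube/config cluster entry (example values only)
clusters:
- name: my-k8s
  cluster:
    # normally the kubeapi-load-balancer endpoint:
    server: https://10.0.0.10:443
    # pointing directly at one master also works, but you lose the
    # HA the load balancer provides if that master goes down:
    # server: https://10.0.0.21:6443
    certificate-authority-data: <base64-encoded CA>
```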
<Zic> ok
<Zic> I really don't know what can I do with this crypto/rsa error, all is working actually, but I'm fearing a bit
<ryebot> Zic: Can you paste the logs for us somewhere to look at?
<Zic> yep
<Zic> http://paste.ubuntu.com/23912041/
<Zic> there is ~5 examples in this extract
<ryebot> Zic: Thanks, we're taking a look
<ryebot> Zic: The lock logging, at least, is normal and expected. Looking into the error.
<ryebot> Zic: Did you by any chance change the service account token signing key?
<Zic> ryebot: nope, my only operation since the restoration of this cluster was testing StatefulSet :)
<ryebot> Zic: okay, cool; still investigating.
<Zic> ryebot: for the record, in the last weeks I had a ton of errors like that, not just a few, on all masters, and all operations were completely blocked if they involved writing (like kubectl create/delete); reading was OK (get/describe)
<Zic> ryebot: here, I just have some, and the "locking" master does not have any
<Zic> all is working actually, I'm just fearing it will come again :x
<ryebot> Zic: understood, it's a reasonable concern
<Zic> ryebot: another maybe useful piece of information, I had kube-dns showing a "30" in the Restart column of kubectl get pods
<Zic> don't know if it seems high
<Zic> ryebot: I'm leaving my office but I'm staying on IRC as usual, feel free to ping back me if you discover something; and thanks for your involvement as usual :)
<Mac_> Hi, I'm using "juju charm get" to download the charm. But I'm not able to download some of the charm, e.g. keystone, neutron-api, and some others.
<Mac_> But I can deploy them directly from the charm store.
<Mac_> $ juju charm get keystone
<Mac_> Error: keystone not found in charm store.
<Mac_> Any suggestion?
<magicaltrout> just did my DC/OS office hour demoing juju, quite a few folks on the call, and she said she's gonna chuck the video around internally because they're on a big ease-of-use drive
<magicaltrout> so I better get those Centos base layers working....
<rick_h> Mac_: try just charm get? You using the charm snap?
<rick_h> Mac_: actually the command is "pull" in there now.
<rick_h> charm pull keystone
<Mac_> charm get result in the same error
<Mac_> $ charm get keystone
<Mac_> Error: keystone not found in charm store.
<rick_h> Mac_: I think you've got a really out of date tool as get is no longer a valid command
<Mac_> $ charm pull keystone
<Mac_> Error: pull is not a valid subcommand
<Mac_> I'm working on Ubuntu 14.04.5.
<Mac_> I'm trying to patch the charm for my environment, therefore need to make local charm repo.
<rick_h> Mac_: oic, hmm. I think the new charm command is only available as a snap these days.
<rick_h> Mac_: maybe just download the zip file from the page https://jujucharms.com/keystone/
<rick_h> Mac_: look on the right column by the file listing for "Download .zip"
<Mac_> So the zip is the same as the "charm get"?
<Mac_> Will try, thanks.
<Mac_> rick_h: thanks.
<rick_h> Mac_: yes, it's the zip in the store for that charm
<Mac_> Another question, can "juju deploy" resolve the series and revision, e.g. "cs:trusty/percona-cluster-31", with the downloaded zip or the dir from "charm get"?
<Mac_> Or should I select series and revision before download?
<rick_h> Mac_: what version of Juju are you on?
<Mac_> $ juju --version
<Mac_> 1.25.9-trusty-arm64
<rick_h> Mac_: so for 1.25 you need to setup a charm repo directory structure that has the charm in a directory called trusty
<Mac_> .
<Mac_> └── trusty
<Mac_>     ├── ceilometer
<Mac_>     ├── ceilometer-agent
<Mac_>     ├── glance
<Mac_>     ├── mongodb
<Mac_>     ├── nagios
<Mac_>     ├── nova-cloud-controller
<Mac_>     ├── nrpe
<Mac_>     ├── ntp
<Mac_>     └── rabbitmq-server
<Mac_> Like this?
<Mac_> And I have something like "cs:~cordteam/trusty/neutron-api-4"
<Mac_> also need ./~cordteam/trusty/ ?
<Mac_> It seems the charm ./revision is not auto generated.
<Mac_> for example, cs:trusty/ceilometer-240, but the ./revision is 44
<Mac_> So if I deploy from cs, it shows 240, but if I deploy from local, it show 44
<Mac_> And the contents are also different
<rick_h> Mac_: so when you deploy a charm locally, it auto updates the revision as it can't tell what changes there are/etc
<rick_h> Mac_: when you go from the store, each upload to the store creates a revision and so the store is tracking it
<rick_h> Mac_: so there's a disconnect when you go from the store to a local files on disk
<Mac_> ok, I'll try "charm get" again, but I just did that this morning.......
<rick_h> Mac_: I'm sorry, you don't need to re-download
<rick_h> Mac_: if you deploy from local it'll just increment the number over and over
<rick_h> Mac_: there's absolutely no association to the revision you download from the store and the revision it shows once you deploy it locally to be honest
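The local repo layout rick_h describes for Juju 1.25 (a series directory such as `trusty/` containing one directory per charm) is easy to script. A sketch; the repo path and charm names below are hypothetical, and the deploy command at the end is shown as a comment for reference only:

```python
# Build the local charm repository layout that Juju 1.25 expects:
#   <repo>/<series>/<charm-name>/
# The repo path and charm names here are hypothetical examples.
import os
import tempfile

def make_repo(root, series, charms):
    """Create one directory per charm under <root>/<series>/."""
    for charm in charms:
        os.makedirs(os.path.join(root, series, charm), exist_ok=True)

repo = os.path.join(tempfile.mkdtemp(), 'charm-repo')
make_repo(repo, 'trusty', ['keystone', 'ceilometer'])

# Deployment then points at the repo, e.g. (reference only, not run here):
#   juju deploy --repository=<repo> local:trusty/keystone
```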
<Mac_> But I thought the charm deployed with "cs:trusty/ceilometer" and "charm get ceilometer" should both be the latest.....
<Mac_> And now I cannot "charm get", I think it's because I did "bzr lp-login".
<Mac_> $ charm get ceilometer
<Mac_> Branching ceilometer to /cord/build/platform-install/juju-charm/trusty/var
<Mac_> Warning: Permanently added 'bazaar.launchpad.net,91.189.95.84' (RSA) to the list of known hosts.
<Mac_> Permission denied (publickey).
<Mac_> ConnectionReset reading response for 'BzrDir.open_2.1', retrying
<Mac_> Warning: Permanently added 'bazaar.launchpad.net,91.189.95.84' (RSA) to the list of known hosts.
<Mac_> Permission denied (publickey).
<Mac_> Error during branching:  Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist.
<Mac_> Is it because I'm not a charmer?
<Mac_> And there's no lp-logout, so I'm stuck.....
<Mac_> @@
<rick_h> Mac_: so the issue is that charm get is from a time when all charms had to be put into launchpad bzr and then the store pulled them out of there
<rick_h> Mac_: but today, charms are uploaded with a newer charm tool (the snap) and can come from github, your own drive, etc
<rick_h> Mac_: that's why the "download zip" is your best bet atm
<rick_h> Mac_: so I'd not use anything pulled from bzr and I'd stop using the charm get command all together because it's just not current enough
<ryebot> Zic: after some investigation, we still don't have a solution. Would you mind opening a bug and tagging us in it so we can track it?
<ryebot> Zic: On our end, we'll keep investigating.
<Mac_> I see....
<Mac_> rick_h: Can the new charm tool (the snap) get old version of charm?
<rick_h> Mac_: yes you need to use the full URL to get an older version like cs:trusty/keystone-5
<icey> has anybody tried mixing bash + python in a reactive, layered charm?
<rick_h> icey: not seen it myself, what's got you thinking about the mix?
<icey> rick_h: a discussion we had on the openstack team a couple of days ago
<icey> rick_h: I couldn't come up with a way to make it work with a bit of thinking but figured that maybe somebody else had thought about it
<kwmonroe> icey: we've mixed bash actions with reactive py charms.. see https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager as the reactive py charm with actions/* as bash stuffs.
<icey> kwmonroe: I've done that kind of thing before, I'm wondering more something that actually mixes reactive bits
<kwmonroe> icey: which reactive bits?  you can do stuff like 'is_state' from bash, https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/mrbench#L20
<icey> kwmonroe: imagine a base layer that `apt-get install -y samba`, and then adding a python layer on top of that
<icey> for example
<kwmonroe> well your first problem is samba...
<magicaltrout> don't you use hdfs for everything these days?
<kwmonroe> don't you?!?!
<icey> ha kwmonroe
<icey> ok, how about `apt-get install -y squid`
<icey> point is mixing layers that use bash with layers that use python
<icey> in the actual reactive bits
<kwmonroe> icey: does your py layer need to react to things like "apt.installed.squid"?  i think that'll work -- stub would know for sure.
<icey> kwmonroe: part of the question then is how does the bash reactive stuff get called
<icey> given that both python and bash reactive bits may want to execute on each hook
<kwmonroe> stub: if i apt install squid in a bash layer, and include that bash layer in an upper python layer, will @when(apt.installed.squid) recognize that squid was installed and be set?
<kwmonroe> icey: as for "how does bash reactive stuff get called", it happens with calls to 'charms.reactive x'
<kwmonroe> charms.reactive is a bash script available on anything that has charms.reactive in its wheelhouse
<icey> kwmonroe: but how would my `squid.sh` get executed so that I could call `charms.reactive set_state('apt.squid.installed')
<kwmonroe> icey: i don't know what squid.sh is in this scenario, but any bash stuff that needs to set a state would do "sudo apt install squid; charms.reactive set_state good2go", and then you could react in a later layer with @when(good2go).
<icey> kwmonroe: let me make a super basic version and share, I think it's confusing
<kwmonroe> roger that icey, but don't push my limits.  if you do, you'll have to answer to cory_fu.  all i know is that bash stuff can do reactive stuff by calling "charms.reactive foo", where foo is:  http://paste.ubuntu.com/23913564/
<icey> kwmonroe: I know about that, we'll see if this (super stupid) example can be made to work ;-)
<icey> kwmonroe: https://github.com/ChrisMacNaughton/layers_test
<kwmonroe> icey: by virtue of having to do more than 2 clicks through that repo, i can tell you're going to need cory_fu.
<icey> HA
<icey> kwmonroe: I'm just going to try that charm :-P
<icey> well, pair of layers, built into a charm
<cory_fu> icey: LGTM
<icey> cory_fu: you think that will actually work?
<cory_fu> Should, yeah
<icey> awesome :)
<kwmonroe> hey icey, LGTM.  that should work.
<icey> thanks kwmonroe ;-)
<icey> wow +1 cory_fu kwmonroe :) it works!
<icey> downright voodoo ;-P
<kwmonroe> icey: if you blog about your experiences, you'll need another 15 minutes of help.
<icey> why would I need help to blog about it...?
<kwmonroe> lol
<cory_fu> icey: Ignore kwmonroe's sass.  :)
<kwmonroe> icey: i fubar'd that.  i meant to say 'you'll *get* another', as if irc help was tied to evangelism.
<cory_fu> heh
<icey> hahaha
<icey> =! cory_fu
<icey> +1
<kwmonroe> you had it right.. hahaha != cory_fu.  he doesn't mess around.
<icey> thanks again guys :)
<Mac_> rick_h: thanks!!
#juju 2017-02-03
<ss_juju> 'juju resolved' command runs the hook if there are any errors
<ss_juju> Is there a way to run all the hooks in a charm from the beginning even if there are no errors?
<stormmore> anyone able to get helm to work with CDK?
<stub> kwmonroe: The apt layer only knows about things you install via its API or layer.yaml. It does not know about any other packages (or you would end up with a few thousand unwanted states for all the packages in the base system)
<kwmonroe> ah, yeah, makes sense stub, thx!
<stub> kwmonroe, icey : It would be possible to make it set the states for a declared set of packages though without automatically installing them. But I haven't seen a real use case for adding that extra complexity.
<kjackal> Good morning Juju world!
<Zic> ryebot: received, do you think it's a k8s bug or a Juju one? just to know where I should file the bug entry :)
<Zic> ryebot: what is weird is that if the key changed, why are 90% of the other requests OK :x
<Zic> ryebot: I'm beginning to ask myself if putting the EasyRSA charm on the same machine as kube-api-loadbalancer could be a bad idea in this problem
<admcleod> asdasdad
<admcleod> .~.
<magicaltrout> thats the most insightful thing you've said in a long time admcleod
<admcleod> magicaltrout: asdasd!
<magicaltrout> thanks!
<admcleod> magicaltrout: asdasdasd?
<magicaltrout> ¡sᴉɥʇ sɐ lnɟǝsn sɐ ʎlɹɐǝu
<admcleod> so clever
<icey> stub:  it wasn't really about the specific state name, the question was about mixing bash and python layers in a built charm (which worked beautifully!)
<ryebot> Zic: I think we can start it out as a k8s bug for now, we can redirect it if it turns out to be Juju. What are your thoughts on colocating easyrsa and loadbalancer?
<Zic> ryebot: I placed the easyrsa charm and the kube-api-loadbalancer on the same machine (by default, they are split across two machines)
<Zic> ryebot: is it a bad idea?
<ryebot> Zic: I'm not sure. I'm unaware of potential conflicts there.
<Zic> ryebot: because I think that's the only part that I customized (except the scaling) on CDK
<Zic> and I'm beginning to think it's maybe what you can't reproduce in your lab
<Zic> (regarding my random problem of "rsa error")
<ryebot> Zic: it may be worth investigating
<Zic> and as I'm reinstalling the whole cluster currently, and have resources to pop one other VM... maybe I will go with the easyrsa charm completely separated :)
<ryebot> Zic: Okay, cool. Can you let me know if that solves the problem? I'll be around :)
<Zic> even if it changes nothing, at least I'm sticking more to your default
<jacekn> hello. I have 2 questions. 1. https://bugs.launchpad.net/juju-core/+bug/1613992 says fix released but juju 1.25.8 is not available in trusty. Any idea when it will be? 2. Is it a supported config to have juju 1.25.6 managing 1.25.8 machines?
<mup> Bug #1613992: 1.25.6 "ERROR juju.worker.uniter.filter filter.go:137 tomb: dying" <canonical-is> <cdo-qa-blocker> <landscape> <juju-core:Fix Released> <juju-core 1.25:Fix Released> <https://launchpad.net/bugs/1613992>
<Spaulding> hi jacekn
<cory_fu> petevg: https://github.com/juju-solutions/matrix/pull/71 is merged.  Thanks
<petevg> cory_fu: awesome, thank you. You have the wrong link to your PR on your ticket, but I have it open, anyway :-)
<cory_fu> Oh, carp.  Because it cut off the last digit
<cory_fu> Fixed.  Thanks
<petevg> np
<cory_fu> kwmonroe: Can you weigh in on https://github.com/juju/juju-crashdump/pull/3
<cory_fu> lutostag: Your input on ^ also welcome.  The motivation for this is that charms that use a venv or resources end up with very large crashdumps.  I started to suggest that /var/lib/juju be excluded by default, but I think petevg's right that it's useful info when running manually.
<jacekn> Spaulding: IDK if you were going to reply but I filed a bug: https://bugs.launchpad.net/juju-core/+bug/1661681
<mup> Bug #1661681: Broken agent complaints about tomb: dying <juju-core:New> <https://launchpad.net/bugs/1661681>
<cory_fu> lutostag, petevg, kwmonroe, kjackal: I wasn't aware of the 5MB file limit in crashdump.   Maybe it makes more sense to just blacklist the .venv directory instead of adding a -s flag?
<kwmonroe> cory_fu: or do both -s when you *know* you'll never need the /var/lib/juju (are there such occasions?), and tar --exclude .venv
<cory_fu> kwmonroe: I think petevg's point that verifying the charm code and unitdata state could be useful in debugging is a good one.  If we exclude .venv and the dump is a reasonable size, why not include it?
<petevg> cory_fu, kwmonroe, lutostag: another approach would be to only run crashdump when there has been a test failure, and then be more relaxed about the size of the resultant tarball.
<lutostag> petevg: I think only getting a dump on failure is probably best practice in general
<kwmonroe> +1 ^
<lutostag> petevg: and I think a small flag is a good idea, and should be extended, so I'll +1 on this PR too, it makes sense to have in general, but we will probably need a better heuristic than just leave out the juju data in the long run
<cory_fu> Also +1 to that but I think we should still consider excluding the venv.  At most, we'd want the output of `pip freeze`
<cory_fu> Ok, that's a reasonable point, lutostag
<cory_fu> Alright.  I think we're all +1 on this PR, then, and changing the charm to only capture crashdump on failure.
<petevg> Cool. Thx, everyone.
<lutostag> (just means my sos-report inclusion, https://github.com/lutostag/plugins/tree/crashdumps/sosreport I should probably upstream now, and change the flags :)
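The exclusion behaviour settled on above can be sketched as a small helper that assembles tar arguments: skip virtualenvs by default (a `pip freeze` listing is enough) and drop /var/lib/juju only behind an explicit small flag. Function name and paths are illustrative, not juju-crashdump's actual code.

```python
def crashdump_tar_args(small=False, exclude_venv=True, out="crashdump.tar.gz"):
    """Assemble tar arguments for a crashdump (illustrative sketch).

    - exclude_venv: leave out bulky .venv directories by default.
    - small: also leave out /var/lib/juju, for when the charm code
      and unitdata state are known not to be needed.
    """
    args = ["tar", "-czf", out]
    if exclude_venv:
        args += ["--exclude", ".venv"]
    if small:
        args += ["--exclude", "/var/lib/juju"]
    return args
```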
<petevg> cory_fu: merged your PR.
<cory_fu> petevg: Thanks
<petevg> np
<cory_fu> stokachu: I implemented the convention we discussed in Cape Town for enabling bundles to generate end-to-end work to verify the deployment, if you would take a look: https://github.com/juju-solutions/matrix/pull/75
<cory_fu> See if that convention would work in conjure-up as well
<stokachu> cory_fu, i think that would work, we just define end_to_end and put whatever tests we need in there
<stokachu> cory_fu, just some additional documentation around using this would be good
<cory_fu> stokachu: Note that the convention we need for matrix is for the bundle-provided end_to_end function or script to run forever, continually generating some sort of load.  Only if it returns / exits is it considered "failed."  That ok with you?
<cory_fu> stokachu: +1 on docs.  Not sure where the best place for them would be
<stokachu> cory_fu, yea that works for me
<cory_fu> stokachu, kwmonroe, petevg: https://github.com/juju-solutions/matrix/pull/76 for README docs about end-to-end and the other ways to extend Matrix
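The end_to_end convention agreed above (run forever, continually generate load, treat a return/exit as failure) can be sketched as follows. This is an illustrative sketch, not matrix's actual API: `do_one_request` is a placeholder, and the `stop` hook exists only so the loop can be exercised in a test.

```python
import itertools


def do_one_request(i):
    """Placeholder load generator; a real bundle would hit the
    deployed service here (submit a job, issue an HTTP request)."""
    return i * i


def end_to_end(stop=None):
    """Bundle-provided end-to-end function per the convention above:
    loop forever generating load. Returning (or raising) is what the
    harness treats as failure. The `stop` callable is a test-only
    escape hatch, not part of the convention."""
    for i in itertools.count():
        do_one_request(i)
        if stop is not None and stop(i):
            return i  # in a real run this line is never reached
```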
<curious-george> if you deploy a subordinate charm and say the parent charm is in 'blocked' state.. will the subordinate charm begin deployment?
<curious-george> I am not seeing the install hook of the subordinate charm getting hit
<stokachu> cory_fu, cool ui :D
<siva> I am not able to add juju model
<siva> ubuntu@juju-api-client:~$ juju add-model default
<siva> ERROR failed to create new model: model "default" for admin@local already exists (already exists)
<siva> ubuntu@juju-api-client:~$ juju list-models
<siva> CONTROLLER: juju-controller
<siva> MODEL        OWNER        STATUS     MACHINES  CORES  ACCESS  LAST CONNECTION
<siva> controller*  admin@local  available         1      4  admin   just now
<Guest80176> No default exists
<Guest80176> Any help is much appreciated
<petevg> Guest80176: You could try running "juju destroy-model default" again, to see if that cleaned things up, but I have a feeling that it won't.
<petevg> Guest80176: I'd be tempted to destroy the controller, since it looks like a local controller, with no models, and re-bootstrap.
<Guest80176> @petevg, yes it does not clean up. I already tried that
<Guest80176> @petevg, I don't want to do that operation as it is destructive for me
<Guest80176> is there a bug?
<petevg> Guest80176: None that I know of. The next thing that I was going to do was ask you to file a bug against launchpad.net/juju
<Guest80176> @petevg, for now I created another model and proceeding
<Guest80176> Is there a way to manually clean this up?
<Guest80176> I meant going into the files system and removing certain entries
<petevg> Guest80176: creating another model makes sense. As for manual cleanup, that would depend on what broke, and I'm afraid that I'm not expert enough in the juju internals to know where to begin to look. :-/
<petevg> (My guess is that you have a stray entry somewhere in mongodb, though I haven't seen the model just not show up when you call list-models.)
<petevg> Anyone else have any ideas?
<Guest80176> @petevg, is there any log file I can take a look at to find the cause of this issue?
<petevg> Guest80176: you can run "juju debug-log --replay > somefile.log", and take a look at that. It may or may not have useful info, though.
<petevg> Guest80176: switch to the "controller" model before running it.
<petevg> (The controller lives in a juju model just like everything else, so you can use a lot of the standard juju debugging tools on it.)
<Guest80176> @petevg, I see the following error in the juju logs
<Guest80176> machine-0: 23:10:56 ERROR juju.worker.dependency "mgo-txn-resumer" manifold worker returned unexpected error: cannot resume transactions: cannot find document {settings 1df4e638-3914-45ef-821c-317041a73aec:r#12#peer#neutron-contrail/0} for applying transaction 5894ef0f13a69524b49d0710_5cd72db0
<petevg> Guest80176: that sounds like a mongodb error. Please file a bug at https://bugs.launchpad.net/juju -- that's the best way to get in front of someone who might be able to offer a workaround (or a fix).
<Guest80176> @petevg, do you need the full log?
<petevg> Guest80176: attaching the full log definitely won't do any harm :-)
<Guest80176> @petevg, I meant for you to take a look at it now?
<petevg> Guest80176: right now, I am afraid that I'm running around packing for config management camp (I'm typing this on my phone). I think that it's safe to assume that you've run into a genuine bug, though, so filing it is probably the next best step.
<petevg> I don't know the mongodb bits well enough to walk you through fixing the error, if that's where it is.
<Guest80176> @petevg, OK. Sounds good
<Guest80176> Thanks
<petevg> You're welcome!
#juju 2017-02-04
<xnox> my conjure-up does not work at all
<xnox> I'm stuck on the "choose a controller or create new" screen and I cannot select anything or move the cursor at all
<tvansteenburgh> xnox: can you navigate with tab or arrows?
<tvansteenburgh> xnox ^
<xnox> tvansteenburgh, on all other screens i can.... not on that one.
<tvansteenburgh> xnox, stokachu is the conjure-up expert, maybe he can help you if/when he comes around
<xnox> purged juju configs and it gets further but is still broken
<xnox> I get loads of python tracebacks
<OnkelTem> Hi
<OnkelTem> What is juju?
<OnkelTem> How does it differ from docker? from lxd/lxc? and from snaps?
<OnkelTem> and what are containers in juju, and charms and bundles, dammit
<aisrael> OnkelTem, this page would be a good place to start: https://jujucharms.com/docs/stable/getting-started
<aisrael> and https://jujucharms.com/docs/stable/about-juju
<Zic> ryebot: hi, just to let you know (I don't want to trouble your weekend :p), I have the "rsa error" (the one we talked about last time) on a brand new CDK cluster (full-default, except the scaling for master 1->3 and etcd 3->5)
<Zic> ryebot: I discovered that this new cluster does not have any NTP sync and the clock is completely on another clocktime for some machines; lazyPower told me once that it can be responsible for some problems, so I will fix that part first
<spaok> hey all
<spaok> does anyone know anything about the hardening options for charms? the site it refers to seems to be gone now
<spaok> Hardening.io used to have other info on it I thought, now it seems it's just ads
<OnkelTem> aisrael: thanks
#juju 2017-02-05
<stokachu> xnox, o/
<magicaltrout> 9:30 is it too early for beer?
#juju 2018-01-29
<bdx> ybaumy: https://paste.ubuntu.com/26480828/
<bdx> ybaumy: ^ super new and untested, but its working for me http://paste.ubuntu.com/26480831/
<bdx> ybaumy: you can modify the install_sources config for both of the elasticsearch and kibana charms to get them on 6.x if needed too
<bdx> but you must be on juju >= 2.3.1
<ybaumy> bdx: which versions do i have to specify for kibana top and filebeat when running the kubernetes elasticsearch charm bundle
<kwmonroe> rick_h: do you know if the charmstore tracks who pushed a specific charm rev?  for things like telegraf, owned by ~telegraf-charmers, is there any data that could point to the actual user?
<kwmonroe> cory_fu: did we ever hatch a plan for preventing 'charm push' from pushing a layer to the store?  like, if -f ./layer.yaml && ! -f ./copyright.layer-basic; exit 1?
<rick_h> kwmonroe: hmm, not publicly. I mean it's just an ACL "does the user have permission to do this"
<kwmonroe> ack
<cory_fu> kwmonroe: There's not a specific plan, but the thing you would want to check for is .build.manifest.  It should be easy enough to add a warning and require --force or something if that file isn't present
<kwmonroe> cool
<cory_fu> kwmonroe: Oh, except that charm-push is part of the go code and not charm-tools
<kwmonroe> doh
<kwmonroe> same with charm release?
<cory_fu> kwmonroe: Yep
<cory_fu> Still shouldn't be too hard to do: https://github.com/juju/charm
<cory_fu> Just need to do it in go
<kwmonroe> heh, thx, i was mostly lost trying to find the repo.  who in the world would have thought ./juju/charm?  bonkers.
<rick_h> cory_fu: kwmonroe yea, https://github.com/juju/charmstore-client  I think is the cli bits
<cory_fu> kwmonroe: I'm not certain that's the right repo
<rick_h> cory_fu: kwmonroe charm is the definition of a "charm" in the juju model vs the cli
<cory_fu> rick_h: Ah, that's it.  Thanks!
<kwmonroe> oh mylanta
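As noted, the real guard would have to land in the Go charmstore-client, but the check itself is simple. A Python sketch of the idea (the wrapper name and behaviour are hypothetical): refuse to push a directory that lacks the .build.manifest file `charm build` writes.

```python
import os
import subprocess


def safe_charm_push(charm_dir, *push_args, force=False):
    """Refuse to push an unbuilt layer to the charm store (sketch).

    A built charm contains .build.manifest, written by `charm build`;
    a raw layer does not. The real fix belongs in the Go client, so
    this wrapper is only an illustration of the check.
    """
    manifest = os.path.join(charm_dir, ".build.manifest")
    if not os.path.exists(manifest) and not force:
        raise SystemExit(
            "%s has no .build.manifest -- this looks like an unbuilt "
            "layer; run `charm build` first or pass force=True" % charm_dir
        )
    return subprocess.call(["charm", "push", charm_dir, *push_args])
```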
<sfeole> cory_fu, ping, hey I noticed there aren't many recent commits to libjuju, is that being worked on?  had some questions on it if so
<kwmonroe> looky at rick_h!  https://github.com/juju/charmstore-client/issues/143 streets ahead my man.
<cory_fu> kwmonroe: Sorry for the chatter on that issue.  I misunderstood what you were saying
<kwmonroe> lol
<admcleod> sfeole: cory_fu good question
<admcleod> *prod*
<sfeole> tvansteenburgh, ^^
<tvansteenburgh> sfeole: yes we are still maintaining it
<sfeole> tvansteenburgh, cool thx
<admcleod> \o/
<petevg> Hey, kwmonroe, I remember you teaching me things about setting the series of a charm in a bundle. I've got a question for you, when you have a moment:
<kwmonroe> sup petevg
<petevg> Namely, if I am referencing a local charm, what are my options for setting the series?
<petevg> The charm lives in /home/<user>/charms/xenial/<the charm>
<petevg> And the default series for the bundle is set to xenial
<petevg> And the default series for the model is set to xenial.
<petevg> But the charm is multi-series, and deploys the trusty version.
<petevg> Do you know of any other flags that I can set to make it be xenial?
<rick_h> petevg: what's the preferred series in the charm metadata.yaml? Juju respects the preference there using order. The first in the list is preferred, then the next, etc.
<rick_h> petevg: the other thing is what cloud is it, as if there's no xenial image available (maas, etc) it might fallback to trusty?
<kwmonroe> ha!  i was just having a convo about local charms in bundles recently... petevg, what rick_h said is the first easy place to start... does <the charm>/metadata.yaml list trusty first?
<kwmonroe> (under the series: key in metadata.yaml)
<petevg> rick_h: trusty is listed first in the charms. We were going to try editing the charm to list xenial first, but that felt wrong :-)
<rick_h> petevg: right, so the bundle selection at the application level should override
<petevg> rick_h, kwmonroe: this is an openstack deploy on MAAS, but I think that the charms all have xenial versions for that environment.
<rick_h> petevg: I'm not 100% sure on the bundle override itself.
<kwmonroe> yeah ^^ me neither on *local* charms
<rick_h> I guess since it's a local charm...hmmm...maybe there's some bug in how that's getting pulled. Honestly though they should have xenial first imo :)
<petevg> kwmonroe, rick_h: cool. We'll edit the charms for now, and I'll dig into it more later. Thank you!
<kwmonroe> yeah petevg, do whatever you need to make xenial first feel right.
<kwmonroe> 'cause that's right
<petevg> I guess I can file lots of bugs against the charms in question, and then contribute a fix. ... which will be outdated as soon as bionic launches :-)
<kwmonroe> whoa whoa whoa there chief.  what's all this "lots of bugs" talk?  if a local charm isn't honoring the overall bundle series, that's one bug.  do not open more than one.
<kwmonroe> or else
<petevg> kwmonroe: I was going to fix the charms. But I guess fixing juju would be better. One bug it is, then :-)
<kwmonroe> heh, right on petevg.  the issue then would become whether or not a bundle.yaml series should override the charm metadata series.  and i'm not so sure it should, so now that you're all-in on one bug, maybe you should open lots of bugs to make your charms xenial first.
<kwmonroe> i'll stop jumping back-and-forth over this fence now and leave you with "do what feels right".
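The two places a series can come from, per the discussion above, in two fragments (file contents are illustrative; whether the bundle-level setting overrides the charm's preference for *local* charms is exactly the open question):

```yaml
# metadata.yaml inside the charm: the first entry under "series" is
# the charm's preferred series, which is why trusty was being picked.
series:
  - trusty
  - xenial
---
# bundle.yaml: a bundle-level default plus a per-application series,
# which should take precedence for store charms.
series: xenial
applications:
  mycharm:
    charm: ./xenial/mycharm
    series: xenial
    num_units: 1
```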
<cory_fu> sfeole: Sorry, I missed your ping somehow.  It's still being maintained, though there hasn't been as much need for changes to it recently.  What was your question about it?
<sfeole> cory_fu, hey,  my ? is around this bug: https://github.com/juju/python-libjuju/issues/201
<sfeole> cory_fu, i can't seem to get that to work, if i try the example in my comment: add_machine(spec="ssh:user@10.10.0.3")
<sfeole> cory_fu, i believe the error I get is "Model not found"
<sfeole> i'm thinking it may have something to do with my formatting
<sfeole> of course i ripped everything down, so i can't give you the exact error at the moment
<cory_fu> sfeole: Hrm.  I'm not sure.  I haven't done much with manual provisioning personally.  "Model not found" doesn't sound like it being an issue with the spec format, though
<sfeole> cory_fu, 1 sec , i'll get the code
<sfeole> cory_fu, https://pastebin.ubuntu.com/26484964/
<sfeole> cory_fu, code and output
<sfeole> admcleod, ^^
<cory_fu> sfeole: Hrm.  I think that doc string is wrong.  Let me see if I can figure out what it should be
<sfeole> cory_fu, thanks!
<cory_fu> sfeole: Bad news, I'm afraid.  It looks like much of the heavy lifting for adding a manually provisioned machine is done in the Juju client but is not reimplemented in libjuju.
<sfeole> cory_fu, no worries
<cory_fu> sfeole: The logic that would need to be added can be found here: https://github.com/juju/juju/blob/develop/environs/manual/sshprovisioner/provisioner.go#L20-L70
<cory_fu> Essentially, ensure an ubuntu user, add all of the local ssh keys to that user's authorized_keys, and then fetch and execute a provisioning script provided by the controller
<cory_fu> sfeole: ^
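Those steps could be sketched like this. This is not libjuju code, just an illustration of what a port of the Go provisioner would have to do; the command strings are approximations, and the script URL is a placeholder, not the exact script juju generates.

```python
def parse_placement(spec):
    """Split an "ssh:user@host" placement spec into (user, host)."""
    if not spec.startswith("ssh:"):
        raise ValueError("expected an ssh: placement spec, got %r" % spec)
    user, _, host = spec[len("ssh:"):].partition("@")
    return user, host


def provisioning_commands(spec):
    """Commands to run over ssh on the target, per the steps above:
    ensure an ubuntu user, install the client's keys, then fetch and
    run the controller-provided provisioning script."""
    user, host = parse_placement(spec)
    return host, [
        "sudo useradd -m -s /bin/bash ubuntu || true",
        "sudo install -d -m 0700 -o ubuntu ~ubuntu/.ssh",
        "cat ~/.ssh/authorized_keys | sudo tee -a ~ubuntu/.ssh/authorized_keys",
        "curl -s $CONTROLLER/provisioning-script | sudo bash",
    ]
```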
<sfeole> cory_fu, what about situations where I want to deploy a bundle in libjuju and the machines already exist?  I know that when using the command line, you need to add a parameter --use-existing=True, or something along those lines..  I believe that's relatively new..  I'm assuming libjuju doesn't handle that yet though?
<sfeole> cory_fu, i found this: https://github.com/juju/python-libjuju/blob/master/juju/model.py#L1108
<cory_fu> sfeole: Correct.  Re-using existing machines from a bundle is a new feature that isn't supported yet.  That one, at least, should be an easy change.
<sfeole> ahh
<sfeole> cory_fu, ok ok
<sfeole> cory_fu, at the minimum i'll file a bug
<sfeole> cory_fu, thanks for your help
<cory_fu> sfeole: np
<cory_fu> I commented on that existing issue as well
#juju 2018-01-30
<kjackal> hey stub I am trying to work on a fix to layer-snap
<kjackal> is this layer on lp or gh?
<stub> either or
<kjackal> awesome
<stub> lp is canonical I guess, but I accept either
<kjackal> so I can submit a pr to gh
<stub> yup
<kjackal> great, thanks
<stub> kjackal: what is the fix?
<kjackal> has to do with no_proxy
<stub> ok. I think newer versions of snapd have a better way of setting the proxy now too, but I haven't investigated.
<stub> There is a branch in flight adding support for the snap-proxy too, but that doesn't touch the traditional proxy stuff.
<kjackal> remember how the snap layer gets the no_proxy settings from juju (I guess) and then places them in /etc/systemd/system/snapd.service.d/snap_layer_proxy.conf? There is a case where the no_proxy setting has a full subnet expansion, and this large string is longer than 2048 chars, so it gets skipped
<kjackal> actually it gets trimmed to 2048 and the rest of the proxy settings are skipped
<stub> https://code.launchpad.net/~adam-collard/layer-snap/+git/layer-snap/+merge/336289
<kjackal> stub I do not think this will fix the issue, let me find the opened issue
<kjackal> stub: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/413
<kjackal> the no_proxy line is too long and the parsing of the snap_layer_proxy.conf stops
<stub> No, that is a different issue. I understand it is the preferred way of accessing the snap store via proxy now
<stub> It does include the 'snap set core' command down the bottom, which I think is similar to the blessed way of setting http proxies
<stub> (so rather than create a systemd override file, with the 2048-char limit, it might be possible to run 'snap set core proxy.something=' and make it snapd's problem)
<stub> But I don't know where the core settings are documented, nor do I know if the setting has been backported to Trusty yet.
<kjackal> my problem is not with the proxy, it is with no_proxy we grab here: https://github.com/stub42/layer-snap/blob/master/reactive/snap.py#L112
<kjackal> I was thinking we could have a no_proxy config variable and not get the full list from whatever juju model-config has
<stub> Yes. I'm just saying that I expect to stop creating systemd override files and start using snapd's proxy setting mechanism.
<stub> I'd like to avoid new config items, as it pollutes every snap built with the layer. Insanely large no_proxy lists are not a common problem, so I'd like a less intrusive fix.
<kjackal> I am trying to find this snapd proxy setting that goes through snap core but don't seem to be that successful..
<stub> It isn't documented at https://docs.ubuntu.com/core/en/reference/core-configuration , so it is either very new or yay documentation
<stub> probably a 2.30 thing, which is the version adam needs to force for the snap-proxy support
<kjackal> yes... but there is nothing for noproxy there
<stub> ok, might be the case. The bug requesting actual proxy settings is still open (I don't consider the systemd override approach a particularly stable workaround)
<kjackal> There is this: https://github.com/snapcore/snapd/pull/3594
<stub> kjackal: Would it be good enough for the snap layer to truncate the NO_PROXY environment variable before stuffing it in the systemd override file?
<kjackal> stub, I am not sure when/why the no_proxy is needed by snapd
<kjackal> yes, truncating it is good enough for me for now
<stub> ok.
<kjackal> usually in network-restricted environments snapd does not need to reach other internal services
<stub> I'm not aware of anyone who needs no_proxy. I just pulled it from the environment with the other env variables because it seemed like a good idea at the time.
<stub> So truncate or don't propagate no_proxy at all.
<kjackal> I would prefer to remove no_proxy rather than having it half in snapd config
<stub> ok.
<kjackal> cool, even easier
<kjackal> doing it now, thanks
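The agreed fix, rendering the snapd systemd override without no_proxy at all, might look roughly like this. It is a simplified sketch, not the layer's actual code; the variable allow-list and formatting are illustrative.

```python
# Systemd truncates override lines around 2048 chars, which is how a
# huge no_proxy value swallowed the settings after it. Per the fix
# above, no_proxy is simply not propagated at all.
PROXY_VARS = ("http_proxy", "https_proxy", "ftp_proxy")


def render_snapd_proxy_override(environ):
    """Render snap_layer_proxy.conf contents from an environment
    mapping, propagating only the allow-listed proxy variables."""
    lines = ["[Service]"]
    for var in PROXY_VARS:
        for name in (var, var.upper()):
            value = environ.get(name)
            if value:
                lines.append('Environment="%s=%s"' % (name, value))
    return "\n".join(lines) + "\n"
```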
<stub> In other news, systemd's opinion of what makes a valid environment variable is different to the OS's opinion.
<kjackal> stub: true, especially since it is doing this silent fail
<kjackal> "I will skip this config and pray no one notices" :)
<pmatulis> i heard there are CentOS-based charms in the store but this comes back empty: https://jujucharms.com/q/centos - how do i find these charms?
<rick_h> pmatulis: https://api.jujucharms.com/charmstore/v5/search?series=centos7
<pmatulis> rick_h, thanks. so just one then? i actually tried this and it failed. some kind of image authorization problem
<pmatulis> https://pastebin.canonical.com/208767/
<rick_h> pmatulis: so yea, you have to sign a thing in amz to get the centos image that's used
<rick_h> pmatulis: it's something I did a long time ago tbh, and appears to only be one atm. We always wanted/hoped for more
<pmatulis> rick_h, i did a manual launch. i didn't see any signing option
<zeestrat> So, any tips for recovering an unresponsive juju environment that borked itself during upgrade-juju?  https://bugs.launchpad.net/juju/+bug/1746265
<mup> Bug #1746265: juju-upgrade from 2.2.9 to 2.3.2 fails with state changing too quickly <juju:New> <https://launchpad.net/bugs/1746265>
<bobeo> hey everyone! I hope your day is going well!
<bobeo> Is there a way to do: juju run --unit "select all units" "command" ; E.G. juju run --unit * ifconfig
<bobeo> is that possible tod o with juju?
<zeestrat> bobeo: do you want to target just units within an application or all units in a model?
<zeestrat> juju run has --application which does the first and also --all which should do the latter
<rick_h> wallyworld: do we have anyone that can look at zeestrat's issue there? I recall some chatter around that but I don't know the results/etc exactly
<wallyworld> rick_h: doing an interview atm
<rick_h> wallyworld: rgr
<zeestrat> Thanks rick_h. Much appreciated.
<wallyworld> zeestrat: rick_h: if you want all units of a given application, you can do juju run --application foo
<rick_h> zeestrat: I more meant his issue with the upgraded juju model during upgrade and bug #1746265
<mup> Bug #1746265: juju-upgrade from 2.2.9 to 2.3.2 fails with state changing too quickly <juju:New> <https://launchpad.net/bugs/1746265>
<wallyworld> but you want all units in a model? --all will do all machines
<rick_h> wallyworld: sorry ^
<wallyworld> rick_h: oh, i see. i don't have any clue off hand. i'll need to look into it
<rick_h> wallyworld: ok sorry, the error there rang a bell but I couldn't find what I thought I'd seen go by in conversations before.
#juju 2018-01-31
<gbc> anyone out there tonight? hoping to get help with a basic question ... been searching, but can't find a straight answer
<gbc> in general, can you deploy multiple charms on the same server without containers?
<gbc> more specifically, can you deploy an openstack all-in-one type system without containers, using juju charms?
<pmatulis> gbc, yes, using the '--to' option of the 'juju deploy' command
<pmatulis> https://jujucharms.com/docs/stable/charms-deploying#deploying-to-specific-machines-and-containers
<gbc> pmatulis, yes, but are the openstack charms designed in such a way that they can co-reside on the same server?
<gbc> 'cause when I do this, it seems like they're conflicting with each other
<pmatulis> oh, that i do not know
<pmatulis> the charms should be documented
<gbc> yeah, you'd think so
<gbc> the charms don't say that it *can't* be done, so I assumed that means that it can be done ...
<pmatulis> gbc, have you tried conjure-up? i believe that's what it does. puts everything on one system. but it uses LXD
<pmatulis> (i'm quite sure, haven't tried in a long time)
<gbc> i'm trying to avoid using containers (reasons) ...
<gbc> https://askubuntu.com/questions/506647/juju-and-openstack-provisioning
<gbc> that link suggests that juju itself doesn't care if charms co-exist on the same server, but it's left up to charm developers ...
<gbc> also says most charms "assume they own the whole machine" ...
<gbc> and every example I see uses containers ...
<gbc> so i'm putting 2&2 together and thinking that the openstack charms don't combine well on the same server ...
<gbc> and that it's just not explicitly stated
<gbc> anyway, thanks for the response
<pmatulis> np, good luck
<gbc> for the record (especially in case anyone comes across this while searching in the future) ...
<gbc> here's another link that says "Using --to flag without containerization is a really bad idea...Basically you're layering a ton of services on top of each other that all expect to own the machine."
<gbc> https://askubuntu.com/questions/459992/hook-failed-shared-db-relation-changed-when-using-openstack-in-the-same-syste
<BlackDex> Hello there. I'm having some issues with LXD/LXC containers when using juju (with MAAS). The problem is that the containers (lxc on 14.04 and lxd on 16.04) aren't getting their DHCP address because UFW is enabled and blocking the DHCP request from passing to the MAAS node. It blocks these requests on the container host. This in turn prevents juju from configuring the container. Is this a known problem? As I
<BlackDex> can't find a bug/issue about this. If I disable UFW on the container host it seems to work, but UFW seems to get enabled sometimes even after I disabled it using `sudo ufw disable`
<Coompiax> Hi Guys. Please could someone help us? We are deploying Openstack using the JuJu charm bundle "openstack-base". We have 2 major subnets including: 10.0.0.0/24 for MAAS and 192.168.0.0/16 for our private cloud/local lan/public facing services. When deploying, we get "no obvious space for x/lxd/y, host has spaces "cloud", "maas". So we need to deploy these charms to the correct spaces.
<Coompiax> When we use the cli using: juju deploy --to lxd:0 --constraints spaces=cloud, the lxd container gets created and all is well. But, what about the bindings? Especially when we get to the openstack bundle which has a bunch of applications, each with its own set of bindings.
<Coompiax> So basically, we are struggling to get to grips on how to allocate applications/units/containers to the correct spaces.
<Coompiax> Any help would be greatly appreciated. We can simplify this by just using haproxy as an example.
<Coompiax> Hello?
<Coompiax> Anyone?
<Cooooompiax> .
<petevg> Darn. Coompiax left. I was going to point them at https://jujucharms.com/docs/2.3/charms-bundles#binding-endpoints-of-applications-within-a-bundle
<petevg> On the command line, it's the --bind param, formatted like --bind "somebindname=somespacename otherbindname=otherspacename"
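In a bundle, the same binding is expressed under each application; an illustrative fragment (application, endpoint, and space names are placeholders matching the spaces mentioned above):

```yaml
applications:
  haproxy:
    charm: cs:haproxy
    num_units: 1
    to: ["lxd:0"]
    bindings:
      "": cloud          # default space for any unlisted endpoint
      website: cloud
      reverseproxy: maas
```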
<bobeo> what's the easiest way for me to move juju instances and containers from one machine to another? I just realized I'm wasting a lot of resources on a system that I could better utilize
<zeestrat> bobeo: easiest would be to add units to a new machine and remove units from the old if the charms support that.
<el_tigro1> I set up an aws controller some time ago and have a couple of models running. Haven't touched anything in a while. Today I decided to make some changes but my aws credentials had changed. So I used `juju-autoload-credentials` and then `juju update-credential aws default`. Everything went smoothly. To warm myself back up to juju, I just did a simple `juju add-machine` but it's been stuck in the "pending" state for about an hour. So I
<el_tigro1> connected to the controller with `juju ssh -m admin/controller 0` to check out the logs in '/var/log/juju/machine-0.log'. I see the same error message every 3 seconds:
<el_tigro1> ERROR juju.worker.dependency engine.go:546 "compute-provisioner" manifold worker returned unexpected error: failed to process updated machines: failed to get all instances from broker: AWS was not able to validate the provided access credentials (AuthFailure)
<el_tigro1> First thing is that I'm sure the credentials are correct but I figure I'll use `juju remove-machine --force <machine-number>` to stop the spamming. That returns success with "removing machine 2", however `juju status` still shows it in the pending state. And I still see the same error message in the controller logs every 3 seconds.
<cory_fu> el_tigro1: I'm pretty sure that loading new credentials only applies to new controllers or models.  The existing controller is going to continue to try to use the old credentials.  I do not know if there's a way to tell it to use new ones
<el_tigro1> first question is how can I force juju to cancel the operation. BTW I'm running juju version 2.3.2-xenial-amd64
<el_tigro1> cory_fu: I thought that's what `juju update-credential` is for
<cory_fu> el_tigro1: You seem to be right.  I haven't used that before.
<cory_fu> el_tigro1: As for cancelling, you could try just doing a `juju remove-machine --force <num>`
<cory_fu> Or possibly --keep-instance if that fails
<el_tigro1> cory_fu: Thanks but I tried that already. See my 4th paragraph :)
<cory_fu> el_tigro1: Yeah, sorry I missed that.  What about --keep-instance?
<el_tigro1> cory_fu: just tried it. No luck
<el_tigro1> Still getting the same error message every 3s !
<cory_fu> balloons: Any suggestions?  ^
<cory_fu> el_tigro1: Only other thing I can think of would be to try restarting the jujud process on the controller, but that seems a bit heavy-handed
<cory_fu> And if it didn't get the updated credential in the right place, it probably wouldn't help
<el_tigro1> cory_fu: Thanks for the suggestion. I guess I could give it a shot
<el_tigro1> cory_fu: I ran `sudo systemctl restart jujud-machine-0.service` and the pending machines have vanished from the model. Thanks!!
<cory_fu> Oh, nice.  I wonder if that will help it pick up the new credential as well?
<el_tigro1> About to try it out :D
<cory_fu> el_tigro1: It would probably be worth filing a bug for that.
<el_tigro1> cory_fu: success!
<cory_fu> That's great.  I wonder why it got into a bad state
<cory_fu> Maybe it was a backlog of failed operations that was blocking resolution?
<el_tigro1> so I guess the sequence of commands to update credentials on the controller requires: `juju-autoload-credentials` and `juju update-credential aws default` on the client. And then 'sudo systemctl restart jujud-machine-0.service' on the controller
<el_tigro1> workaround
<el_tigro1> cory_fu: I wonder as well
<bobeo> o/
<bobeo> what's the constraint option for storage space? Is it disk?
<bobeo> juju deploy lxd --constraints disk=400G ?
<bobeo> I want to be able to deploy an lxd system to build web applications on, but it just gave me an error, says "cannot use --constraints on a subordinate application"
<rick_h> beisner: so the constraints on disk is that you need a root disk of 400G from the underlying cloud or find me a machine in maas that has a disk of 400G
<rick_h> sorry, bobeo ^
<rick_h> bobeo: and normally you can't use the constraints on something that's a subordinate of another charm because it's the main charm the machine comes up to match. The subordinate is installed afterwards as a kind of "add-on" chunk of software
#juju 2018-02-01
<kwmonroe> hey ybaumy, if you're still interested in k8s-elastic, the ES charms have been refreshed with xenial versions, and that bundle has been released to --edge: https://jujucharms.com/canonical-kubernetes-elastic/98
<kwmonroe> if you want to give it a whirl, deploy the edge version of the bundle like this:  juju deploy ~containers/bundle/canonical-kubernetes-elastic --channel edge
<elmaciej> Hi! Does anyone know when we can expect kubernetes as a cloud provider in Juju? We need to deploy apps on an existing kubernetes not created by juju.
<bobeo> o/
<bobeo> having issues removing an application that failed to install. What are my options?
<bobeo> I tried juju remove-application owncloud , as well as juju remove-unit owncloud/3
<bobeo> both said "removing" but neither removed the application, it's still showing as caught in failed to install
<pmatulis> bobeo, do you need other stuff in the model? is removing the model an option?
<zeestrat> bobeo: you can also try juju resolved --no-retry owncloud/3 which resolves the error, then juju might be able to remove it
<kwmonroe> bobeo: as zeestrat mentioned, 'juju resolved --no-retry owncloud/x' is probably the best option to get to a state where remove-application will finally take it out.  this will get better when https://bugs.launchpad.net/juju/+bug/1741506 is addressed.
<mup> Bug #1741506: Removing a unit or application doesn't resolve errors <cli> <usability> <juju:Triaged> <https://launchpad.net/bugs/1741506>
<magicaltrout> hello lovely people
<magicaltrout> i have a node that is up but juju status says is down
<magicaltrout> how do i kick the probe without restarting the server?
<thumper> magicaltrout: it is possible that the agent isn't communicating
<thumper> magicaltrout: which version?
<magicaltrout> 2.3.1
<thumper> hmm...
<thumper> which node?
<magicaltrout> which node?
<magicaltrout> what is the correct answer to such a question?
<magicaltrout> for example
<magicaltrout> kubernetes-worker/17      unknown   lost    19
<magicaltrout> is what I see
<magicaltrout> but juju ssh 19 works
<neclimdul> having trouble with graylog. digging into the logs its complaining about  "TypeError: blocked_mongo() takes 0 positional arguments but 1 was given"
<neclimdul> not sure where to go to start debugging and report it so i'm here
<arosales> Is https://jujucharms.com/kubernetes-core/ still the smallest k8s recommended bundle or do folks have other suggestions for a minimal k8s structure?
<arosales> s/structure/cluster
#juju 2018-02-02
<bdx> magicaltrout: just kick the juju machine agent process on that node
<bdx> `sudo service jujud-machine-<#> restart` ?
<kwmonroe> neclimdul: that's def a bug: https://git.launchpad.net/graylog-charm/tree/reactive/graylog.py#n81
<kwmonroe> that handler needs to be "def blocked_mongo(elasticsearch):"
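That TypeError is easy to reproduce outside of charms.reactive: the framework invokes a registered handler with the relation endpoint as a positional argument, so a zero-parameter handler blows up exactly as in the log. A self-contained sketch (the dispatcher and endpoint below are toy stand-ins, not the real charms.reactive machinery):

```python
# Toy stand-in for the charms.reactive dispatcher: it always passes the
# relation endpoint object to the handler as a positional argument.
class FakeEndpoint:
    name = "elasticsearch"

def dispatch(handler, endpoint):
    return handler(endpoint)

def blocked_mongo_broken():
    # Missing parameter: dispatch() raises
    # "TypeError: blocked_mongo_broken() takes 0 positional arguments but 1 was given"
    return "blocked"

def blocked_mongo(elasticsearch):
    # Fixed signature: accepts the endpoint the dispatcher passes in.
    return f"blocked: waiting on {elasticsearch.name}"

try:
    dispatch(blocked_mongo_broken, FakeEndpoint())
except TypeError as exc:
    print(type(exc).__name__)              # TypeError

print(dispatch(blocked_mongo, FakeEndpoint()))  # blocked: waiting on elasticsearch
```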
<kwmonroe> arosales!!  it's 2 friggin machines!  how much smaller do you wanna go??
<kwmonroe> (hey btw ;)
<arosales> kwmonroe: howdy as you Texans say
<kwmonroe> i'm doing whatever they say in philly through this weekend.  hook 'em eagles!
<arosales> kwmonroe: I was mainly confirming k8-core was still the _recommended_ minimal
<arosales> I am a fan of anyone playing against the Patriots
<kwmonroe> yup arosales, k8s-core is it.
<arosales> cool, thanks kwmonroe
<arosales> enjoy Philly
<arosales> skip the Liberty Bell and visit some real history at the Rocky bronze statue
<kwmonroe> oh i'm not *in* philly.  i'm just talking like a philadelphian until the superbowl is over.
<kwmonroe> so instead of "howdy", i'd say something like "cheese steak".
<jhebden> neclimdul: make sure you're on the latest release. That bug was fixed previously.
<bobeo> sadf
<bobeo> o/
<bobeo> having issues getting owncloud to install. so far I've tried several instances of owncloud deployment, and so far no dice. I keep getting a "hook failed: install" error. Is there a way I can debug this issue?
<magicaltrout> do juju debug-log for a start bobeo
<magicaltrout> you'll get some error messages hopefully
<bobeo> magicaltrout: ok so I see the log, btw, this is a godsend <3, I need to identify what it means by "apt key verification failed". Also, why is a canonical package repo pulling from opensuse?
<bobeo> magicaltrout: also, it looks like that even after that verification fails, it continues to download anyway. Does that mean it's still working? I figured if it pushed an exit status 1, that it would fail outright?
<magicaltrout> bobeo: i guess the repo updated their gpg keys or something, you'll have to file a bug https://bugs.launchpad.net/charms/+source/owncloud
<magicaltrout> i'm not sure who owns that charm though, you could prod rick_h and see if he knows
<rick_h> not sure who the owncloud folks are tbh
<rick_h> been around a while though but not sure they've been vocal in irc/etc
<kwmonroe> bobeo: looks like a new key was uploaded to the owncloud repo back in july of last year: http://download.opensuse.org/repositories/isv:/ownCloud:/community/xUbuntu_14.04/
<kwmonroe> (btw, that's just opensuse hosting the project -- don't read too much into that, as it doesn't mean opensuse *packages* are trying to be installed)
<kwmonroe> bobeo: but the charm has a specific key sum hard coded, see around line 40: https://api.jujucharms.com/charmstore/v5/owncloud/archive/hooks/install
<kwmonroe> so i'm guessing that simply isn't the right shasum to verify the new key
<kwmonroe> that said, there's a bug referencing an entirely new repo for more recent versions of OC:  https://bugs.launchpad.net/charms/+source/owncloud/+bug/1518009
<mup> Bug #1518009: New official repository starting from 8.2 <owncloud (Juju Charms Collection):New> <https://launchpad.net/bugs/1518009>
<kwmonroe> bobeo: but it doesn't seem like that charm is actively maintained -- no replies to those bugs in quite some time. like magicaltrout said, you could try to raise a new bug, or reach out directly to the maintainers listed at the top of the readme to see what's up with their charm.
<magicaltrout> i saw your Bigtop PMC vote go through my inbox a couple of months ago kwmonroe, i forgot to say congrats when it got signed off, congrats!
<ejat> hi ..
<ejat> i used openstack-telemetry charms .. 2 services got problem (mongodb & neutron-gateway)  .. juju status output : http://paste.ubuntu.com/26506527/
<ejat> mongodb log output: https://paste.ubuntu.com/26506520/
<ejat> neutron-gateway: https://paste.ubuntu.com/26506512/
<kwmonroe> thanks magicaltrout!
<kwmonroe> it was a nice pay bump
<rick_h> oooh, kwmonroe is celebrating today?
<magicaltrout> ha, na i'm just slow to the party
<kwmonroe> went from $0 to a very large value of 0.
<rick_h> magicaltrout: lol, ok I had to go look and see it's Nov :P
<rick_h> kwmonroe: but I didn't know so I'll be the last congrats to you :P
<kwmonroe> heh, thx rick_h :)
<ejat> how  to disable ipv6 in the charm?
<kwmonroe> ejat: i don't think many (any?) charms let you disable host networking config.  they could, i suppose, by doing something with /etc/sysctl.conf, but i can't think of any that do.
<kwmonroe> ejat: why do you want to disable it?
<kwmonroe> also ejat, i didn't see any errors from mongo in your status or paste above.
<kwmonroe> the neutron-gateway seems to be trying to add an unknown interface (eno2) to the br-ex bridge.
<kwmonroe> that stuff looks like it's coming from the openstack charmhelpers code.  maybe beisner can point you to a troubleshooter?  (line 5416 here https://paste.ubuntu.com/26506512/)
<beisner> seems like faulty config being fed into the charm.  'juju config neutron-gateway' output will be helpful to see.
<bobeo> kwmonroe: magicaltrout is it possible for us to take over that charm? Or would we need to build another charm for it? we use it heavily as part of our project, so we don't mind maintaining it as a way of contributing back
<kwmonroe> bobeo: you bet you can take it over!  i took a look for other owncloud charms and found that the illustrious bdx has a better version: https://github.com/jamesbeedy/layer-owncloud
<kwmonroe> bobeo: if you are going to go with a hostile takeover, i'd recommend starting from there ^^
<kwmonroe> note that sucker unfortunately doesn't deploy out-of-the-box because of some new packaging in v10, but i did just verify it works if you set the repo to deploy v9.  here are some more deats on that: https://github.com/jamesbeedy/layer-owncloud/issues/1
<bobeo> kwmonroe: I actually tried that one, and that was the one we were using before, it didn't work either. but yea, a hostile takeover would be nice, but can we do a "beerstiletakeover" instead? I mean, it's bdx. Who doesn't love bdx?
<bobeo> kwmonroe: oooh! that explains a lot!
<bobeo> kwmonroe: What's the best way to modify the config.yaml? As I understand it, it doesn't pull that until you execute the deploy command.
<bobeo> kwmonroe: from what I understand, I'd have to start with: juju deploy cs:~jamesbeedy/owncloud-4 ; and then where would I go to modify the config.yaml file, and I assume to use sed as a method to swap to the v9?
<kwmonroe> easy breezy bobeo... juju deploy with a --config yaml.  like this:
<kwmonroe> $ cat test.yaml
<kwmonroe> owncloud:
<kwmonroe>   install_sources: |
<kwmonroe>     - "deb http://download.owncloud.org/download/repositories/9.0/Ubuntu_16.04/ /"
<kwmonroe> $ juju deploy ~jamesbeedy/owncloud --config test.yaml
<bobeo> kwmonroe: 8O
<bobeo> kwmonroe: For surely, as he hath shown me the light, I see for the first time.
<kwmonroe> ha!
<kwmonroe> i do what i can for beerstiletakeoverers
<bobeo> kwmonroe: Ok, so to the next level, I'd like to do this for several packages, minecraft, (I know, I can't help it), owncloud, and gitlab for now, with pubshot, and several others in the near future. Also, I would love to know, is there any real difference between doing it with xenial, or precise for instance? If so, I wouldn't mind working to push several other packages from older versions like precise and trusty to xenial.
<rick_h> bobeo: the big thing that hit most software going from precise to xenial and such is that there was a move from the upstart init system to systemd. So those init scripts/etc need to be handled.
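As a hedged illustration of what that migration involves (service name and paths are invented), an upstart job like `/etc/init/myapp.conf` containing `start on runlevel [2345]` / `exec /usr/bin/myapp` becomes a systemd unit along these lines:

```ini
# /etc/systemd/system/myapp.service -- hypothetical xenial replacement for
# an upstart job; enable with `systemctl enable --now myapp`.
[Unit]
Description=myapp (example service)
After=network.target

[Service]
ExecStart=/usr/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```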
<bobeo> kwmonroe: feed a bobeo a fish, turn him into Gollum, teach a bobeo to fish, turn him into Lord Bobeo, Conqueror of Deprecated Packages
<kwmonroe> nah bobeo, don't let rick_h scare you.  xenial is pretty much just precise with 4 years of extra linux stuff on top.
<kwmonroe> on the real though, huge +1 for refreshing any precise/trusty charms for xenial.  for gitlab, you might want to check out magicaltrout's (way better) rev:  https://jujucharms.com/u/spiculecharms/gitlab-server/8
<kwmonroe> in fact, magicaltrout, do you think we should replace https://jujucharms.com/gitlab/ with your gitlab-server?
<kwmonroe> rick_h: you still logging tips-n-tricks?  if so, i've got 3 that have come up recently:
<kwmonroe> - if 'juju remove-[application|unit]' fails, 'juju resolved --no-retry' until it works
<kwmonroe> - don't want to 'show-action-output' later, use 'juju run-action --wait' instead
<kwmonroe> - need to change default charm config, use 'juju deploy foo --config' with a custom yaml
<magicaltrout> you know your kids control the music too much when your "Top Tracks of 2017" on spotify includes 2 frozen tracks.....
<magicaltrout> I don't mind kwmonroe, i still need to get around to exposing more of the fun stuff, but it does the job at the moment
<kwmonroe> you know you're getting old when "alexa, play my programming station on pandora" plays friggin NPR News!  i almost threw her across the room.
<magicaltrout> lol
<kwmonroe> then i thought, maybe it's not alexa, maybe it's pandora... and then i thought, oh no, maybe it's me!!
<zeestrat> Speaking of refreshing old charms, rick_h, any news on getting some community curation tools in the store? It would help folks find which charms are preferred and which need some love and care.
#juju 2018-02-03
<raghav> hi
<raghav> hi all
<raghav> i am trying to bootstrap juju
<raghav> but its failing
<raghav> getting error as
<raghav> ERROR failed to bootstrap environment: subprocess encountered error code 130
<raghav> can anyone help with this... are some additional steps required?
<raghav> i used below command to install juju
<raghav> snap install juju
<raghav> and to bootstrap....juju bootstrap
<raghav> *snap install juju --classic
<raghav> can anyone help
<demo> what?
<kwmonroe> hey bobeo, fyi, bdx did some great work in owncloud in the last day.  it supports v10, an external db (psql) and includes a vhost template to make the pesky apache config work.  check it out: https://jujucharms.com/u/omnivector/owncloud/1
<magicaltrout> what a geek
#juju 2020-01-27
<livcd> What is the role of Juju in the age of Kubernetes?
<manadart> livcd: Using Juju, one can deploy and scale Kubernetes itself. Such a deployment can then be used as a substrate by Juju to model and manage solutions on top of Kubernetes.
<manadart> Deploying K8s with Juju: https://jaas.ai/canonical-kubernetes
<manadart> It is also easy to play with Juju/K8s using microk8s.
<livcd> hmmm
<livcd> So Juju would be a platform to deploy the k8, then manage the apps that sit on K8
<manadart> livcd: Yes.
<livcd> I need to watch some videos to understand this better
<skatsaounis> hi any ideas? cannot read "/var/lib/juju/agents/unit-xxx-2/state/uniter": invalid operation state: "relation-changed" hook has a remote unit but no application
<skatsaounis> can I manually rewrite the uniter? If yes what keyvalue should I add?
<gnuoy> hi there, does anyone know what changed around cross model relations between juju 2.7.0 and 2.7.1 ?
<gnuoy> I have a test which was passing and no longer is.
<gnuoy> It looks like the cmr hook isn't firing but digging into it in a bit more detail now
<stickupkid> gnuoy, there are CMR changes, but it should only be landing in 2.8 (develop), let me know if you find anything
<gnuoy> stickupkid, is https://paste.ubuntu.com/p/2kJ75VJcmh/ normal ?
<gnuoy> stickupkid, oh, sorry
<gnuoy> All the CMR relations are in a 'terminated' state
<gnuoy> stickupkid, I've raised https://bugs.launchpad.net/juju/+bug/1860992
<mup> Bug #1860992: CMR relation goes into terminated state on deploy <juju:New> <https://launchpad.net/bugs/1860992>
<stickupkid> gnuoy, I'll just have a quick look now to confirm
<gnuoy> thanks
<stickupkid> manadart, or anybody know when we updated lxd stuff...
<stickupkid> I'm getting juju.provider.lxd failed to get instances from LXD: Get https://172.17.0.1:8443/1.0/containers?recursion=1: Unable to connect to: 172.17.0.1:8443
<stickupkid> the url is available to access
<stickupkid> gnuoy, looks confirmed to me, I'll see why we're getting an error in remote application worker for keystone
<gnuoy> stickupkid, great, thank you
<stickupkid> gnuoy, is there any chance you can test with edge?
<gnuoy> stickupkid, sure
<gnuoy> I'll do it now and update the bug
<stickupkid> gnuoy, fyi: that's the 2.8 release and thanks
<gnuoy> stickupkid, 2.8 seems to have the issue too
<stickupkid> gnuoy, if you need a work around for now, you can remove the saas from swift-sf and then consume the offer again.
<gnuoy> stickupkid, ack, thanks
<timClicks> some interesting threads in discourse have popped up over the last few days
<timClicks> https://discourse.jujucharms.com/
#juju 2020-01-28
<nammn_de> manadart: I was trying to use your newly added deltaops for controllersettings (juju-ha-space ..) and realized those are in their own collection. With this, I changed the `deltaops` function. Does that make sense? https://github.com/nammn/juju/blob/48c527fba46e0c5057e1feda909ba635c957449c/state/settings.go#L536  Or did I overlook something and the
<nammn_de> `settings` collection is still the right collection for this?
<manadart> nammn_de: Sorry, I meant to reply to your question. You want something more like this: https://pastebin.canonical.com/p/H5mmbX6WKM/
<nammn_de> manadart: ahhh makes sense. Thanksss
<stickupkid> manadart, looking at the integration test - I thought we fixed the issue with missing relation scope?
<stickupkid> manadart, https://paste.ubuntu.com/p/BbZfwgqtdT/
<stickupkid> also "yeah boi" - I'm laughing so hard - haha
<manadart> stickupkid: Sorry, was just OTP. Is that with develop?
<stickupkid> manadart, well your branch
<manadart> stickupkid: You updated the dep and installed it?
<stickupkid> make install'ed it
<manadart> So was that migrating the offering model, or the consuming one?
<stickupkid> offering
<stickupkid> can ho once you're free manadart
<manadart> stickupkid: Daily
<nammn_de> manadart: To change the spaces for constraints I see the following options: get constraints for each type (applications, machine, units..) and change these OR just get all constraints as collection and edit these. What do you think makes the most sense? Im thinking about the latter because of implementation ease
<nammn_de> manadart: hmm but if I think about it, this could potentially be a lot of docs constraints to at least check and then change..
<nammn_de> ahhh no it might make the most sense to just query for the space and then to change that subset =D
<manadart> nammn_de: Yes.
<achilleasa> manadart: populating modelUUID/DocID did the trick!
<manadart> achilleasa: Nice.
<nammn_de> manadart: another thing.  I realized because I query the constraints collection itself and need to write them back again I need the "_id", as I don't know whether the doc is a machine, application etc.
<nammn_de> I see the following options:
<nammn_de> 1: add _id to constraintsDoc <-- I would prefer this
<nammn_de> 2: use own constraintsDoc struct . What do you think? Or another suggestion in mind?
<manadart> nammn_de: Just add `DocID string `bson:"_id"` like we do elsewhere.
<nammn_de> manadart: great, will do
<achilleasa> anyone up for a CR for https://github.com/juju/juju/pull/11156 ?
<manadart> achilleasa: I can look.
<stickupkid> manadart, if you have a relation to another CMR model, if you try and destroy the controller, it fails, is this something you ran into?
<manadart> stickupkid: Yes, this is known and will not be changed (there is an LP bug). That is why the migration integration tests force-destroy the offer before teardown.
<stickupkid> manadart, this is annoying as we can't tear down a controller if an integration test fails
<stickupkid> manadart, i.e. the integration test doesn't know about the relation
<manadart> stickupkid: Ah, yes.
<stickupkid> manadart, this is why the integration jujus are hanging around, even if you told them to be destroyed
<manadart> stickupkid: I wonder if the bash-based suites would allow us to register extra tear-down...
<stickupkid> manadart, i'm unsure
<stickupkid> manadart, defer but in shell
<stickupkid> haha
<manadart> stickupkid: Exactly :)
<stickupkid> all i want is closures
<stickupkid> manadart, i wonder if we can ask for all relations?
<stickupkid> manadart, and just kill them all off
<manadart> stickupkid: Could work I guess.
<stickupkid> manadart, got it to work with the check, but need to work on the destroying issue
<stickupkid> i'll update your pr
<stickupkid> https://github.com/juju/juju/pull/11155
<stickupkid> manadart, got it working, not the defer thing, but cleaning up the offers
<stickupkid> get in there
<stickupkid> CR https://github.com/juju/juju/pull/11157
#juju 2020-01-29
<wallyworld> kelvinliu: you can adapt this https://pastebin.ubuntu.com/p/yyxJnT54nj/
<kelvinliu> wallyworld: ok, thanks
<wallyworld> loop until call count is reached
<wallyworld> you can call c.Fail() if needed etc
<kelvinliu> yep
<babbageclunk> ugh, has anyone had a situation where goimports keeps adding an import for logger even though there's a logger defined in a different file in the package?
<wallyworld> sometimes, pita
<babbageclunk> it's bloody weird!
<hpidcock> babbageclunk: why don't you just forget goimports exists? you were in ignorant bliss for so long
<babbageclunk> I would but it's great except when it doesn't work right.
<hpidcock> babbageclunk: are you using gopls yet?
<babbageclunk> hpidcock: stockholm syndrome basically
<babbageclunk> hpidcock: nope not yet
<babbageclunk> hpidcock: is it good? Or is it... bad?
<hpidcock> babbageclunk: a bit of both
<babbageclunk> like everything then
<hpidcock> babbageclunk: mostly good
<babbageclunk> hmm, might try it sometime
<wallyworld> kelvinliu: is the vsphere PR good to land?
<kelvinliu> wallyworld: nah, I'm refactoring one of the failing suites to use gomock
<wallyworld> no worries
<wallyworld> kelvinliu: did they fix your phone?
<kelvinliu> yes, they gave me a new phone and it's free. : )
<wallyworld> wow, and so they should have
<wallyworld> they broke it
<kelvinliu> yeah, i think they know the guy didn't fix the screen last time.
<kelvinliu> correctly
<wallyworld> i bet he's in trouble
<kelvinliu> im fixing mfa
<kelvinliu> hope he won't be in trouble too much
<wallyworld> who knows with Apple
<nammn_de> manadart achilleasa stickupkid: let's say I add a DocID string `bson:"_id"` field.  How can I say that I only want to set the constraints for everything in that struct except that DocID, so that I don't run into the immutable `_id` error? Or do I have to set each field separately ([]bson.DocElem..) ?
<stickupkid> nammn_de, for selection or inserting?
<nammn_de> stickupkid: oh sry, forgot the e.g. link https://github.com/nammn/juju/blob/bfa6bd8e2d2b664381c1e833a9fd2f5b7bf94953/state/constraints.go#L92
<stickupkid> nammn_de, if it's inserting, then make a new type without a doc id ?
<nammn_de> stickupkid: okay, that was my other thought.
<nammn_de> stickupkid: thought maybe there is something easier :/
<stickupkid> it's juju, nothing is easy :wink:
<nammn_de> stickupkid: haha okay, thanks!
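For the update side of the same problem, one common pattern (a sketch, not juju's actual code) is to strip the immutable `_id` key before wrapping the remaining fields in a `$set` update document:

```go
package main

import "fmt"

// setWithoutID builds a Mongo-style update document, dropping the immutable
// "_id" key so the server never sees an attempt to modify it. Plain maps
// stand in for bson.M to keep the sketch dependency-free.
func setWithoutID(doc map[string]interface{}) map[string]interface{} {
	set := make(map[string]interface{}, len(doc))
	for k, v := range doc {
		if k != "_id" {
			set[k] = v
		}
	}
	return map[string]interface{}{"$set": set}
}

func main() {
	update := setWithoutID(map[string]interface{}{
		"_id":    "c#machine#0", // would trigger the immutable-field error if kept
		"spaces": []string{"alpha"},
	})
	set := update["$set"].(map[string]interface{})
	_, hasID := set["_id"]
	fmt.Println(hasID) // false
}
```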
<stickupkid> manadart, updated to include forcing, I think it's a good shout
<manadart> stickupkid: Yep.
<stickupkid> manadart, https://github.com/juju/juju/pull/11155 <- I got this working - you think we should move of from draft?
<manadart> stickupkid: Yep.
<nammn_de> manadart: rename-space now is review ready.  QA steps works well for me https://github.com/juju/juju/pull/11143
<manadart> nammn_de: OK.
<achilleasa> Can I get a CR on https://github.com/juju/juju/pull/11158?
<stickupkid> achilleasa, looking now
<achilleasa> stickupkid: Do I need to add empty stubs for older API versions?
<stickupkid> offically yes
<stickupkid> but actually it's ok, but yeah you should
<achilleasa> stickupkid: ok, I will add them together with the missing comments
<achilleasa> stickupkid: pushed a commit. Is that ok or did I forget something?
<stickupkid> achilleasa, looks good
<manadart> Upgrade step for the systemd unit file relocation: https://github.com/juju/juju/pull/11159.
<nammn_de> manadart: I still need to fix the unit tests and do some cleaning. Just wanted to know beforehand whether you meant the modeloperation like this: https://github.com/juju/juju/pull/11143
<stickupkid> achilleasa, congrats on the book :)
<achilleasa> stickupkid: thanks :-)
<nammn_de> achilleasa: yeah congrats!
<rick_h> manadart:  doh sorry, never hit submit on assigning you that bug. just hit submit now
#juju 2020-01-30
<wallyworld> tlm: i have a couple of questions about the watcher PR, you free for a HO?
<tlm> wallyworld: yep
<wallyworld> looks like hpidcock raised one of my issues already
<tlm> does hpidcock want to join ?
<wallyworld> let's jump standup
<hpidcock> kk
<babbageclunk> wallyworld: when you get a moment can you take a look at https://github.com/juju/juju/pull/11160? It's the CMR offer name fix
<wallyworld> babbageclunk: yeah, was reviewing then had to go to catalyst meeting
<wallyworld> will finish when meeting is done
<babbageclunk> wallyworld: oh right - thanks!
<wallyworld> babbageclunk: i'm having trouble grokking names.NewApplicationTag(offerName), maybe we can discuss. i just need to be educated
<babbageclunk> wallyworld: it's the other side of this line: https://github.com/juju/juju/blob/develop/apiserver/facades/controller/crossmodelrelations/crossmodelrelations.go#L311
<wallyworld> babbageclunk: just with john in leads call, feel free to jump in
<babbageclunk> wallyworld: it's definitely weird though - and caused me a bit of trouble (since it makes the tags coming back from looking up remote entities ambiguous
<babbageclunk> wallyworld: wilco
<babbageclunk> )
<wallyworld> tlm: i left a nitpic plus a cleanup suggestion
<wallyworld> hpidcock: was https://bugs.launchpad.net/juju/+bug/1855777 fixed do you recall?
<mup> Bug #1855777: Seg fault generated due to malformed bundle.yaml file <bundles> <ux> <juju:Triaged by hpidcock> <https://launchpad.net/bugs/1855777>
<wallyworld> there was another bundle fix that got done
<wallyworld> we may have done both
<hpidcock> wallyworld: I'll verify, but I believe so
<wallyworld> ta, just doing gardening for 2.7.2 RC
<hpidcock> when did 2.7.1 go out??
<wallyworld> last week at sprint
<hpidcock> ok
<wallyworld> it's ok if we mark against 2.7.2
<wallyworld> so long as it goes fix committed
<wallyworld> ty
<hpidcock> yeah they both were done atm tlm's induction
<hpidcock> sorry, missed the part about marking as committed
<stickupkid> manadart, I'm going to try and create a PR for CMR in 2.7... this will be fun
<manadart> stickupkid: Yay. Have you looked at babbageclunk's CMR patch? I haven't checked if that will get in our way yet.
<stickupkid> manadart, not at all, will check now
<stickupkid> manadart, I like that he's having his own conversation in this comment https://github.com/juju/juju/pull/11160/files#diff-cf3dd42d097ceeee567e5127f9a1e1abR190-R196
<stickupkid> manadart, one thing we've not done is removed the restriction + feature flag
<manadart> stickupkid: I think we have sufficient confidence and test coverage to do that.
<stickupkid> I'll do it after the merge
<stickupkid> manadart, so that if I die I don't have to repeat this process - I believe these are the CMR commits - going to re-check now https://paste.ubuntu.com/p/rDZ8TN99jS/
<nammn_de> manadart: in for another cr? https://github.com/juju/juju/pull/11143 I reworked the code to use modeloperation
<manadart> nammn_de: Few items remaining from my last review.
<nammn_de> manadart: oh, really? Sorry. Gonna fix them, thought I resolved them
<nammn_de> manadart: ahh damn, my IDE refactor changed more places than I wanted
<nammn_de> dammnit
<nammn_de> ahhhh I just realized that you reviewed before I asked and buff got confused =D
<nammn_de> manadart: regarding 2 of your comments:
<nammn_de> - it doesn't need that interface if nothing implements it. But didn't it? I think it didn't have any usage, so I just removed it and the function using it now uses state.ModelOperation instead
<nammn_de> - A *state.State supplies it directly.: the api doesn't use a *state.State in its struct. Should I add one? I thought we didn't want that
<nammn_de> manadart: I added some comments/questions to those PR comments from you. Thanks!
<manadart> nammn_de: Replied to your comment.
<nammn_de> manadart:  maybe I am wrong, but deleting it still fails compilation for me:  apiserver/facades/client/spaces/spaces_rename.go:235:23: api.backing.ApplyOperation undefined (type Backing has no field or method ApplyOperation)
<manadart> nammn_de: Applying this to your branch builds fine. https://pastebin.canonical.com/p/rMH88bWKYj/
<nammn_de> manadart: damnit stupid me, thanks! I removed from the interface and from the shim
<nammn_de> manadart: pushed the  changes and applied your review.
<achilleasa> manadart: so, the HookContext uses an interface and gomock but the NewHookContext in export_test uses concrete instances :D
<achilleasa> I might be able to cheat and get my tests in without messing with embedding uniter.Unit
<achilleasa> manadart: doh! hml introduced a mockHookContextSuite which I totally missed (other file). I will add my tests there :-)
<manadart> achilleasa: Standup.
<achilleasa> hml: can you please review https://github.com/juju/juju/pull/11164?
<hml> achilleasa:  sure
<hml> achilleasa:  approved.
<achilleasa> hml: tyvm
<stickupkid> manadart, I think I solved the CMR git diff issue, i didn't start far back enough
<manadart> stickupkid: Yeah, I think there were some prep commits quite a while ago.
<achilleasa> hml: did you see my reply to your comment? Should I go ahead and merge?
<hml> achilleasa:  not yet, looking
<hml> achilleasa:  agreed,  i figured a small chance, worth noting, but not worth a different approach.
<hml> merge away
<nammn_de> manadart: time for a call later today or tomorrow to spec out remove-space/if needed rename-space, a little?
<manadart> nammn_de: Sure. I have to head home now, but ping me when you are on tomorrow.
<nammn_de> manadart: sure will do =D. Nice evening!
<stickupkid> I know what's wrong, I'm missing a PR
<rick_h> stickupkid:  doh! easy fix?
<stickupkid> starting again, it's quite far back :(
<rick_h>  stickupkid doh
<stickupkid> rick_h, this is the new list https://paste.ubuntu.com/p/2GKNYPW5tm/
 * rick_h is afraid to click
<stickupkid> it's ok till about 1/2 way in and then explodes
<stickupkid> one last go, before I try again tomorrow
<rick_h> oh geeze...
<stickupkid> the fixes aren't hard, but I'm obviously missing a PR somewhere
<stickupkid> brb grabbing a drink
<stickupkid> rick_h, it's babbageclunk's changes that are causing a bit of an issue - this was to be expected, but I've managed to get further now than I have done, so we're close
<stickupkid> rick_h, done it!
<stickupkid> rick_h, nice healthy set of changes +8,096 −1,523
<rick_h> stickupkid:  woot woot
<rick_h> stickupkid:  congrats!
<stickupkid> there are some issues with dependencies, but I'll solve them tomorrow, seems the Gopkg.toml file isn't in sync
<rick_h> stickupkid:  cool yea and guild please help stickupkid with any testing/QA so we can make sure we're feeling good about it in 2.7.2
<rick_h> stickupkid:  maybe reach out to babbageclunk to help as well in his TZ since he's in there a little bit
<rick_h> #moar-eyes-better
<babbageclunk> rick_h: what's the problem?
<babbageclunk> rick_h: ah - has there been a decision to release cmr migration in 2.7?
<babbageclunk> that 8k line PR looks fun
<rick_h> babbageclunk:  it's a backport of the cmr migration bits to 2.7 yea
<babbageclunk> ouch
<wallyworld> rick_h: backporting a rather large feature to 2.7 seems risky?
<rick_h> wallyworld:  we're hanging out in the team chat we can discuss it if you're up for it
<addyess> i'd apreciate a review of charm-helpers PR: https://github.com/juju/charm-helpers/pull/423
<addyess> thanks timClicks
<techalchemy> addyess, thanks for putting that ll together it looks good to me fwiw
#juju 2020-01-31
<timClicks> addyess, techalchemy: agreed - there's lots of useful refactoring in there
<techalchemy> timClicks, most significantly, adding compatibility is important; I'm installing a fork so I can use python 3, which is a bit annoying
<timClicks> whelp
<timClicks> is juju's over-the-wire communication mechanism, e.g. server-side sockets, documented anywhere publicly?
<wallyworld> not that i know of
<wallyworld> there may be an old wiki page somewhere
<timClicks> *websockets
<timClicks> wallyworld: okay that's fine.. I'll put that down as another todo
<wallyworld> one of many
<wallyworld> hpidcock: adding a new operation tag for the actions work https://github.com/juju/names/pull/102
<hpidcock> wallyworld: looking now
<wallyworld> ta
<wallyworld> hpidcock: i hate go dep, bring on go mod https://github.com/juju/names/pull/103
<hpidcock> one of us one of us one of us!
<wallyworld> ta. i hope this works
<wallyworld> and it did, dep ensure is happy now
<tlm> hpidcock: I think we may need to change this watcher design a little, the go routine has broken these tests because they want such strict ordering
<tlm> got time for a HO ?
<wallyworld> tlm: it's not possible to use expectanytimes?
<wallyworld> and /or expect after etc?
<wallyworld> can jump in standup if you want
<tlm> just looking at this now. Can  you give me 20, will try and get a better example of the problem I have  hit
<wallyworld> ok
<tlm> wallyworld: might be user error, if you got 10 minutes would love to rubber duck it
<wallyworld> ok
<timClicks> signing off, heading off for the week
<tlm> cya
<hpidcock> oops, how did you go tlm?
<tlm> good thing we found a better path
<hpidcock> the gift that keeps on giving
<wallyworld> tlm: it's ok if the watcher stuff misses 2.7.2 - it's not critical as nothing is broken per se. we can finish it up monday
<tlm> wallyworld: yeah thats fine
<tlm> PR will be in by monday
<tlm> just a bit of boilerplate to mock all this in
<nammn_de> manadart: added the changes and phase to skip the model if its not controllermodel for settingsdelta.
<nammn_de> in regards to specprep call for removespace: Should we include rename-space or would you say it's done and its not needed?
<nammn_de> https://github.com/juju/juju/pull/11143
<manadart> nammn_de: I don't think we need to spend time with specification given how far it is along - we'll just do what we need to land the patch.
<nammn_de> manadart: i also updated the qa steps in the pr to match the change
<manadart> nammn_de: I will review 11143 when I get a chance. I need to make sure I get this critical fix for 2.7.2 done first.
<nammn_de> manadart: great,  sure no worries! Gimme a ping, when you find the chance for the spec talk
<stickupkid_> achilleasa, you free ?
<achilleasa> sure
<achilleasa> daily?
<stickupkid_> yeah
<stickupkid_> damn it, I missed a PR
<stickupkid_> :(
<stickupkid_> it's the one where I re-wrote the firewall migration import as well
<manadart> stickupkid_: Got a sec for HO?
<stickupkid_> sure
<achilleasa> manadart: quick HO?
<manadart> achilleasa: OMW
<manadart> Fix for the machine address space ID upgrade step: https://github.com/juju/juju/pull/11167
<stickupkid_> well that back port went smoother now that I added the missing PR
<stickupkid_> yisss, go vet passed, we've got further than last time :|
<stickupkid_> such a good idea to have this static analysis!
 * stickupkid_ pats self on back
<achilleasa> manadart: from a quick inspection of some of the uniter calls (OpenPorts) in particular, I think it might be better to have the state bits return back ModelOperation instances which then get aggregated in a single operation
<achilleasa> there are parts that have side-effects (e.g. update doc fields once the txn completes)
<achilleasa> s/doc fields/model fields/
<achilleasa> we could of course throw in an adaptor if the core doesn't care about Done and only implements Build
<manadart> achilleasa: Happy to take a look if you get a draft up.
<achilleasa> manadart: not quite there yet but will try to get something up for comments
<manadart> nammn_de: You want to chat before standup?
<nammn_de> manadart: sure!
<achilleasa> manadart: here is an example for OpenPorts: https://paste.ubuntu.com/p/2jhgH2xgW7/ I have patched the original OpenPorts method to fetch and apply the operation. The facade will eventually call into OpenPorts (through unit.OpenPorts and friends) and compose the modeloperation with other ops
<achilleasa> composing the txn bits is not trivial though; if an error occurs we won't know which []txn.Op caused it, to pass the error to the Done method
<stickupkid_> achilleasa, can you not change how txn works?
 * achilleasa runs away
<rick_h> lol
<achilleasa> will provide more info in standup. Maybe I am missing something
<stickupkid_> achilleasa, RunTransactionObserver might be your friend
<stickupkid_> https://paste.ubuntu.com/p/7V85FZTbrW/
<stickupkid_> it did pass integration tests :)
<rick_h> guild if anyone wants to have fun with the community check out https://github.com/juju/gomaasapi/pull/85
<nammn_de> manadart: I took a quick look and qa on your pr. Added something I saw
<hml> achilleasa:  RE: 11166, i'm on the fence for both points.  We can chat it out?
<achilleasa> meet you in daily in 1m
<hml> achilleasa:  omw
<hml> rick_h: want to join our conversation. ^^
<nammn_de> manadart: can a machine have an address in that space, but not a constraint?
<manadart> Yes. But if it is not constrained, we still allow renaming.
<nammn_de> manadart: makes sense, thanks
<manadart> nammn_de: Added another review to your patch.
<manadart> nammn_de: Also, a suggested simplification for you op/factory: https://pastebin.canonical.com/p/4vgXNZYQVR/
<rick_h> hml:  sorry, was rebooting/etc still going?
<rick_h> wheee waving from Description: Ubuntu Focal Fossa (development branch)
<manadart> nammn_de: Thanks for the review. Can you or stickupkid_ approve it?
<rick_h> looks like folks are done. hml let me know if you want to catch up
<stickupkid_> manadart, i didn't do QA... nammn_de can you approve once you've done it
<nammn_de> done
<manadart> Ta.
<hml> rick_h:  we had to stop. can catch up now?
<rick_h> hml:  omw
<nammn_de> manadart: regarding your comment: https://github.com/juju/juju/pull/11143#discussion_r373516389 to move the method to `ConstraintsOpsForSpaceNameChange`: I intentionally wrote the logic outside, in the `rename.go` file. To comply with your review I have 2 ideas: - move the space logic into `ConstraintsOpsForSpaceNameChange`, or give
<nammn_de> `ConstraintsOpsForSpaceNameChange` a kind of `filter func()` as a second param to apply the space changes. Or did you have something else in mind?
<hml> simple review for someone? https://github.com/juju/juju/pull/11169
<hml> achilleasa:  iâve udpated 11166 with concensus from rick_h , i believe your concerns are addressed as well.  pls let me know
<achilleasa> hml: should I wait for the rename commit before reviewing?
<hml> achilleasa:  yes, just to be sure.  could you do QA though?  only the state-set key= piece will change.  just so i have a heads up of issues while i'm at the name change
<achilleasa> will do
<hml> achilleasa:  ty
<nammn_de> manadart: for 2 of your review points I am unsure what you mean. I added comments to them
<achilleasa> stickupkid_: RunTransactionObserver won't help as it doesn't tell you which Op failed; however, I think I can work around this by moving error annotations in Build()... should be good enough for the ConsolidateState call I am working on (I hope)
<manadart> Need a review for the instance poller fix for space names: https://github.com/juju/juju/pull/11170
<rick_h> guild if anyone has time please help ^ for release setup
<achilleasa> manadart: looking
<manadart> achilleasa: Actually doesn't change the instance poller :) I need to do it here, because bootstrap also retrieves instance addresses.
<achilleasa> manadart: I think fixing it in the provider as you did is the best way to go
<stickupkid_> achilleasa, I'd be happy in making txn more aware of failed txn.Ops
<stickupkid_> achilleasa, just need to ensure not to leak info
<achilleasa> stickupkid_: I think we would need to tweak mgo to return the index of the failed operation (maybe wrap it in an error that can be unwrapped)
<stickupkid_> achilleasa, exactly my thinking
<nammn_de> achilleasa: I am not 100% deep in all possibilities of spaces.  Thought you know more and as manadart is out. What possibilities exist? Given the comment from manadart https://github.com/juju/juju/pull/11143#discussion_r373477221
<achilleasa> stickupkid_: for the time being, the state bits that I am working on run the txn and annotate the error. I can probably do that in the build code and just return the first error out
<achilleasa> nammn_de: looking in 5'
<hml> achilleasa:  removing set from utils makes the problem bigger, as now 5 versions of the charm package conflict.  which raises at least one question of… why are we using 5 versions of charm
<achilleasa> hml: workaround is to remove Tags from utils/set
<achilleasa> we don't seem to be using it in juju
<nammn_de> stickupkid_:  is it possible from a test to read the content of a txn.Op? Without needing to apply it?
<achilleasa> hml: fifth time's a charm? :D
<hml> achilleasa:  lol
<hml> is it 5 oclock somewhere yet?
<achilleasa> here, in 11m ;-)
<hml> ha!
<stickupkid_> nammn_de, what do you mean, a txn.Op is a struct, you can do what you want, or am I missing something?
<stickupkid_> nammn_de, are you wanting to do a "dry run"?
<nammn_de> stickupkid_: yeah, right. I'm writing a test, which returns Ops. Something like a dry run would be great
<nammn_de> txn.Op saves my change in an Update interface. Which I probably need to parse back. Just thought we already had something in place for that
<nammn_de> like the dry run
<stickupkid_> so if you're writing a test, then shouldn't it be `reflect.DeepEqual(expected, txn.Op{Update: bson.D{{"$set", bson.D{{"message", message}}}}})`
<nammn_de> stickupkid_: thanks! Probably what I had in mind
<hml> achilleasa:  i wonder if the problem is not creating a new utils.v2?
<nammn_de> stickupkid_: lol  ops[0].Update.(bson.D).Map()["$set"].(state.ConstraintsDoc).Spaces
<achilleasa> hml: left some comments in 11166; do you want me to defer QA till your changes land?
<hml> achilleasa:  looking
<hml> achilleasa:  why donât i make these changes and the others before qa
<stickupkid_> what's the Map?
<stickupkid_> nammn_de,
<nammn_de> stickupkid_: was just debugging through it and it seems that bson.D has a function to return the saved transaction as a map[string]interface{}
<achilleasa> stickupkid_: manadart this is my current approach to composing multiple ModelOperations together: https://paste.ubuntu.com/p/dbGNR3wh4w/
<stickupkid_> achilleasa, could you not ignore the err message, but log it out?
<stickupkid_> achilleasa, at least we're not ignoring it completely
<stickupkid_> ?
<achilleasa> stickupkid_: ideally we would need to de-dup (most Done() impls will prob just return the error straight out) and pack in a multi-error
<achilleasa> I will try to come up with something better on Monday
<achilleasa> we could also simply return the error out
<achilleasa> but this snippet works as expected if err == nil
<hml> rick_h: do we have any precedent for hook tools returning not found
<rick_h> hml:  thinking, like what example?
<hml> rick_h: since weâre going with state-set can set an empty string and will not delete the key.
<hml> rick_h: the follow on to me is that state-get will return not found for keys not found.
<hml> rick_h: o.w. how to distinguish between not set (not found) and empty string
<rick_h> hml:  hmm, at first I was thinking a non-0 return code (e.g. command failed)
<rick_h> hml:  but not sure
<rick_h> hml:  sec call
<hml> rick_h: relation-get and relation-set will return a not found
<rick_h> hml:  ok, then I guess let's do that
<hml> rick_h: agree,
<rick_h> hml:  thanks for checking into that
<skay> how would you add your current juju model to the command prompt? I thought I had a way to do it, but it is not working
<hml> skay:  what did you try?  i'm not totally sure how myself :-). but i have a maybe not great idea
<skay> I tried PS1="(\$(juju models | grep \* |cut -d ' ' -f 1))$PS1"
<hml> skay: the output of `juju switch` also shows the current model.  though you'd have to dump the controller and user pieces
<hml> it's a one line output
<skay> oh, I thought that was just juju1
<hml> skay:  nope
#juju 2020-02-01
<rick_h> skay:  so I'd grep the current-controller from .local/share/juju/controllers and then have to match that with models.yaml "current-model"
<rick_h> skay:  at least if you wanted to do it sans juju cli, that's the on disk cache
#juju 2020-02-02
<timClicks> morning all
<_thumper_> morning
