[00:36] <larrymi_> nicbet: ack
[04:32] <konobi> is there a way to tell juju to stop asking for Ubuntu SSO creds?
[04:33] <konobi> (during deploy that is)
[04:41] <jrwren> publish your charm
[04:59] <spaok> if I have a subordinate charm on a service with multiple units, so each service lists my subordinate charm under it, can the subordinate charm get a list of all the units in the service? it seems like it only sees itself in related_units
[06:07] <marcoceppi> spaok: right, it can only see its peers
[06:07] <marcoceppi> spaok: what are you trying to get?
[06:36] <spaok> marcoceppi: I was thinking I needed to build a list of units in the charm to pass to haproxy, but I think from what I read about the haproxy charm it will union all the units that join with the same service_name so I might be ok
[06:37] <spaok> though I really wish the haproxy charm had a python example of how to send it the yaml structure it wants
[06:39] <spaok> i can't seem to figure out how to set the service structure it wants
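Since spaok asks for a Python example, here is a minimal sketch of the nested structure the haproxy charm's README describes for its `services` relation setting. The service name, addresses, and options below are made-up placeholders, and the exact key names should be double-checked against the haproxy charm docs:

```python
# Sketch: the "services" structure the haproxy charm expects, built as
# plain Python.  Key names (service_name, servers, ...) follow the
# haproxy charm README; "myapp" and the addresses are placeholders.
services = [{
    "service_name": "myapp",   # units joining with the same name get unioned
    "service_host": "0.0.0.0",
    "service_port": 80,
    "service_options": ["mode http", "balance leastconn"],
    # each server entry: [name, address, port, options]
    "servers": [["myapp-0", "10.0.0.10", 8080, "check"]],
}]

# In a hook you would YAML-serialize it and set it on the relation, e.g.:
#   from charmhelpers.core import hookenv
#   import yaml
#   hookenv.relation_set(services=yaml.safe_dump(services))
```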
[07:39] <stub> cholcombe: Yes. It needs to serialize the data so it can detect changes, and JSON was chosen as the format.
[08:15] <spaok> is relation_set local, or does it send data to the related unit
[08:18] <spaok> oh, interesting
[08:31] <stub> spaok: relation_set sets the data on the local end (the only place you can set data), which is visible to the remote units.
[08:32] <stub> spaok: A unit doesn't 'send' data on a relation. It sets the data on its local end and informs other units that it has changed.
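stub's two points can be modeled with a toy stand-in. This is not the real juju API, just an illustration of the semantics: a unit writes only its own settings bucket on a relation, and remote units read that bucket.

```python
# Toy model of juju relation-data semantics (not real juju code):
# each unit can only write its *own* settings on a relation; remote
# units can only read them.
class ToyRelation:
    def __init__(self):
        self._buckets = {}  # unit name -> that unit's settings dict

    def relation_set(self, unit, **settings):
        # a unit sets data on its local end only
        self._buckets.setdefault(unit, {}).update(settings)

    def relation_get(self, reader, remote_unit):
        # a reader sees the *remote* unit's bucket, never writes it
        return dict(self._buckets.get(remote_unit, {}))

rel = ToyRelation()
rel.relation_set("myapp/0", hostname="10.0.0.5", port="8080")
# haproxy/0 later observes myapp/0's data via relation-get:
seen = rel.relation_get("haproxy/0", "myapp/0")
```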
[10:09] <socceroos> hi all
[10:10] <socceroos> Has anyone here tried using juju 2.0 to manage deployments on Digital Ocean?
[10:10] <socceroos> or any version of juju really...
[10:22] <socceroos> ping...
[12:10] <SimonKLB> is it possible to run charms that require authentication with the bundletester?
[12:11] <SimonKLB> and/or how can i run bundletester/amulet tests on charms that use resources?
[12:41] <anshul_> facing an issue while testing Glance using Amulet. Glance is loaded locally and when I execute juju status the state does not change; it remains at:
[12:41] <anshul_>   glance:
[12:41] <anshul_>     charm: local:xenial/glance-150
[12:41] <anshul_>     exposed: false
[12:41] <anshul_>     service-status:
[12:41] <anshul_>       current: unknown
[12:41] <anshul_>       message: Waiting for agent initialization to finish
[12:41] <anshul_>       since: 06 Oct 2016 23:12:45+05:30
[12:41] <anshul_>     relations:
[12:41] <anshul_>       amqp:
[12:41] <anshul_>       - rabbitmq-server
[12:41] <anshul_>       cluster:
[12:41] <anshul_>       - glance
[12:42] <anshul_> facing an issue while testing Glance using Amulet. Glance is loaded locally and when I execute juju status the state does not change; it remains at "Waiting for agent initialization to finish". if I change the series to trusty it works fine
[12:43] <anshul_> Please help
[13:53] <nicbet> larrymi_ what version of VMWare are you running? I just can't get this to work properly. I've tried juju 1.25-6 over night, as well as using trusty images. always the same result with the public key not being injected properly and bootstrap failing.
[14:14] <petevg> bradm: did you ever get a chance to move the bip stuff to its own namespace. If so, would you go and close out the old reviews for it? (one of them is here: https://code.launchpad.net/~josvaz/charms/trusty/bip/client_side_ssl-with_helper-lp1604894/+merge/301802)
[14:16] <larrymi_> nicbet: I'm also using 6.0 3620759 for ESXi and 3644788 for the vCenter. I suspect it's a different issue but you'd need to be able to get to the logs. If I had to guess, I would guess that cloud-init is not able to get to a resource (perhaps blocked by the firewall)
[14:17] <nicbet> larrymi_ I'll have to do shenanigans and mount the failed bootstrap VM's hard drive to a different vm to access the logs... let me see
[14:18] <larrymi_> nicbet: ok
[14:20] <nicbet> larrymi_ would you mind sharing your redacted custom cloud yaml for vsphere? maybe I'm missing a config entry.
[14:22] <larrymi_> nicbet:   vsphere:
[14:22] <larrymi_>     type: vsphere
[14:22] <larrymi_>     auth-types: [userpass]
[14:22] <larrymi_>     endpoint: **.***.*.***
[14:22] <larrymi_>     regions:
[14:22] <larrymi_>       dc0: {}
[14:23] <nicbet> pretty much what I have, for me dc0 is replaced with our datacenter name
[15:06] <nicbet> larrymi_ : cloud-init.log on the mounted drive states that neither DataSourceOVFNet, nor DataSourceOVF could load any data. Notably this line appears too: ubuntu [CLOUDINIT] DataSourceOVF.py[DEBUG]: Customization for VMware platform is disabled.
[15:07] <nicbet> larrymi_ : did you configure something special on the vmware itself to enable OVF customization?
[15:09] <shruthima> hi kwmonroe, we are charming the WebSphere Liberty charm for Z. This charm provides httpport, httpsport, and hostname. We thought to make use of the http interface https://github.com/juju-solutions/interface-http but we noticed the http interface provides only httpport and hostname. So we want to know: do we need to write a new interface?
[15:19] <larrymi_> nicbet: I didn't have to configure anything... just the export prior to bootstrap that I mentioned earlier, but it won't bootstrap at all without that.
[15:19] <larrymi_> nicbet: interesting that it's disabled.. I wonder what it's keying on.
[15:21] <nicbet> larrymi_ the cloud image has the open-vm-tools package installed. upon boot they are started, I see that from the logs. Then cloud-init tries all the different data sources, like the EC2 store, config ISO in CD-ROM, etc. one of them is OVFDatasource, which uses vmware tools to grab the info from the OVF template configuration
[15:21] <nicbet> larrymi_ at that point it's not given anything, and logs the above line about guest customization being disabled
[15:23] <nicbet> larrymi_ dumb question ... are you running juju against a vCenter or a vSphere ESXI?
[15:24] <larrymi_> nicbet: a vCenter
[15:24] <larrymi_> nicbet: which log are you looking at?
[15:25] <nicbet> larrymi_ /var/log/cloud-init.log and /var/log/vmware-vmsvc.log together with https://github.com/number5/cloud-init/blob/master/cloudinit/sources/DataSourceOVF.py
[15:28] <larrymi_> nicbet: I do have the same Oct  5 19:57:55 ubuntu [CLOUDINIT] DataSourceOVF.py[DEBUG]: Customization for VMware platform is disabled.
[15:28] <larrymi_> nicbet: can probably be ignored
[15:29] <nicbet> larrymi_ do you find a line that says it loaded cloud-init data from the OVF source?
[15:30] <larrymi_> nicbet: yes, it then goes on to Oct  5 19:57:54 ubuntu [CLOUDINIT] __init__.py[DEBUG]: Seeing if we can get any data from <class 'cloudinit.sources.DataSourceNoCloud.DataSourceNoCloud'>
[15:31] <nicbet> larrymi_ grep 'data found' cloud-init.log prints all lines as 'no local data found from ***' where *** is the DataSource it tried
[15:33] <rufus> https://www.youtube.com/watch?v=lm64uOErZ8w
[15:34] <larrymi_> nicbet: yeah they're there
[15:34] <larrymi_> Oct  5 19:57:55 ubuntu [CLOUDINIT] handlers.py[DEBUG]: finish: init-local/search-NoCloud: SUCCESS: no local data found from DataSourceNoCloud
[15:34] <larrymi_> Oct  5 19:57:55 ubuntu [CLOUDINIT] handlers.py[DEBUG]: start: init-local/search-ConfigDrive: searching for local data from DataSourceConfigDrive
[15:34] <nicbet> larrymi_ is there any that says data was found?
[15:40] <larrymi_> nicbet: the logs don't say data was found specifically but they seem to indicate that it only fails to get the data locally.
[15:45] <nicbet> larrymi_ I have a hunch that this only works with vCenter
[15:54] <kwmonroe> shruthima: i wouldn't write a new interface, just open an issue and/or provide a pull request to include an https port here:  https://github.com/juju-solutions/interface-http
[15:55] <shruthima> ok sure kwmonroe
[15:59] <shruthima> kwmonroe: May I know when the xenial series of the current IBM-IM charm will be pulled back? Then we can push ibm-im for Z
[16:00] <kwmonroe> shruthima: i'll try to complete that before my EOD, so roughly 7 hours from now.
[16:01] <shruthima> oh k thanku
[16:11] <larrymi_> nicbet: yes could be. I haven't tried with ESXi host as endpoint
[16:14] <shruthima> kwmonroe: i have edited the http interface's provides.py http://paste.ubuntu.com/23285185/ and requires.py http://paste.ubuntu.com/23285180/ locally and tested it; it is working fine. So is there any way to create a merge proposal for the http interface?
[16:16] <shruthima> kwmonroe: i have seen https://github.com/juju-solutions/interface-http/issues/5 ; a similar issue is open for the http interface as well
[16:48] <kwmonroe> shruthima: issue #5 would allow you to define multiple http interfaces in your charm's metadata and react to them differently (with different ports).  if that would solve your needs, you can just add a comment to issue #5 saying it would be useful to you as well.  if you instead require multiple ports for a single http interface, i think that's a separate issue.  you could run "git diff" in the directory where you made your
[16:48] <kwmonroe>  edits, paste the output at http://paste.ubuntu.com, and then include that paste link in a new issue here: https://github.com/juju-solutions/interface-http/issues.
[16:50] <shruthima> ok thanks kevin
[17:01] <holocron> could some one take a look at this pastebin and tell me what next steps to debugging might be? http://pastebin.com/7MHZV4e3
[17:03] <holocron> this, specifically: unit-ceph-1: 12:57:00 INFO unit.ceph/1.update-status admin_socket: exception getting command descriptions: [Errno 111] Connection refused
[17:21] <spaok> stub: I guess what I'm confused on is: when I try to set the services structure for when haproxy joins, if I do hookenv.relation_set(relation_id='somerelid:1', service_yaml) then the joined/changed hook runs but haproxy doesn't get the services yaml; however if I do hookenv.relation_set(relation_id=None, service_yaml) then it does work and haproxy builds the config right, but after a bit when update-status runs it errors because relation_id isn't set
[17:23] <stub> spaok: Specifying None for the relation id means use the JUJU_RELATION_ID environment variable, which is only set in relation hooks. Specifying an explicit relation id does the same thing, but will work in all hooks. Provided you used the correct relation id.
[17:25] <stub> spaok: You can test this using "juju run foo/0 'relation-set -r somerelid:1 foo=bar'" if you want.
[17:28] <stub> (juju run --unit foo/0 now it seems, with 2.0)
[17:28] <stub> "juju run --unit foo/0 'relation-ids somerelname' " to list the relationids in play
[17:29] <bdx> hows it going everyone?
[17:29] <bdx> do I have the capability to bootstrap to a specific subnet?
[17:29] <bdx> using aws provider
[17:31] <lutostag> any way to specify "charm build" deps? (for instance, in my wheelhouse I have cffi, which when the charm is built depends on having libffi-dev installed. I have this in the README, but wondering if it's possible to enforce programmatically)
[17:31] <spaok> thanks stub, I'm fairly certain I have the right relid, but what I see from the haproxy side is only the private_ip and unit id or something else; with None, I get the services yaml. it's very confusing
[17:31] <spaok> I'll look at trying that command, I was wondering how to run the relation-set command
[17:32] <spaok> lutostag: layers.yaml ?
[17:32] <spaok> not 100% on that
[17:34] <lutostag> spaok: yeah, I have deps for install time unit-side like http://pastebin.ubuntu.com/23285512/, but not sure how to do it "charm build" side
[17:37] <spaok> ah gotcha, not sure
[17:38] <lutostag> think I'll go with a wrapper around charm build in the top-level dir, don't need to turn charm into pip/apt/npm/snap anyways
[17:53] <kwmonroe> hey icey - can you help holocron with his ceph connection refused?  http://pastebin.com/7MHZV4e3
[17:57] <holocron> Something in dpkg giving an Input/output error
[18:00] <kwmonroe> lutostag: seems odd that an entry in your wheelhouse.txt would require a -dev to be installed for charm build
[18:02] <lutostag> kwmonroe: doesn't it though; it builds it locally by compiling stuff. I guess there are C extensions in the python package itself
[18:02] <lutostag> lxml is another example
[18:02] <kwmonroe> cory_fu_:  does charm build have runtime reqs dependent on the wheelhouse?
[18:02] <kwmonroe> cory_fu_: (see lutostag's issue above)
[18:03] <lutostag> (but I can get around that one, cause we have that deb'd up)
[18:04] <holocron> kwmonroe icey going to try this http://askubuntu.com/questions/139377/unable-to-install-any-updates-through-update-manager-apt-get-upgrade
[18:04] <cory_fu_> lutostag, kwmonroe: You shouldn't need a -dev package for charm-build because it *should* be installing source only and building them on the deployed unit, since we don't know the arch ahead of time.
[18:05] <lutostag> ah, so I'll need these -dev's on the unit-side, good to know, but interesting.
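Following cory_fu_'s point that wheels are compiled on the deployed unit, one possible answer to lutostag's question, assuming the charm uses layer:basic, is its `packages` option in layer.yaml, which installs apt packages on the unit before the wheelhouse sources are built there (a sketch; the package list is illustrative):

```yaml
# layer.yaml sketch -- assumes layer:basic, whose "packages" option
# installs apt packages on the unit before wheelhouse sources compile
includes:
  - layer:basic
options:
  basic:
    packages:
      - libffi-dev    # needed to build cffi from source
      - python3-dev
```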
[18:05] <kwmonroe> holocron: that doesn't sound like a ceph problem then... got a log with the dpkg error?
[18:06] <cory_fu_> lutostag: What's the repo for cffi so I can try building it?
[18:06] <lutostag> cory_fu_: my charm or the upstream python package?
[18:07] <cory_fu_> lutostag: The charm
[18:07] <lutostag> cory_fu_: lemme push...
[18:07] <cory_fu_> Sorry, I misread cffi as the charm name
[18:08] <holocron> kwmonroe: similar messages to this: http://pastebin.com/XZ0uFfz8
[18:08] <holocron> I've had make, and build-essential give the error, right now it's libdpkg-perl
[18:10] <kwmonroe> holocron: when do you see that?  on charm deploy
[18:10] <lutostag> cory_fu_: bzr branch lp:~lutostag/oil-ci/charm-weebl+weasyprint-dep
[18:11] <holocron> kwmonroe no, the charm deployed fine yesterday, it came in as part of the openstack-on-lxd bundle. i was able to create a cinder volume even.. logged in today and saw that error in my first pastebin
[18:11] <lutostag> (and I'll need to add the deps as explained too)
[18:11] <holocron> i logged into the unit and did an apt-get clean and apt-get update
[18:11] <holocron> now it's failing with this
[18:12] <holocron> is it common practice to scale out another unit and then tear down the breaking one?
[18:12] <holocron> like, should i just make that my default practice or should i try to fix this unit?
[18:12] <kwmonroe> holocron: common practice is for the breaking unit not to break
[18:13] <kwmonroe> especially on some nonsense apt operation
[18:13] <holocron> :P yeah that's the ideal
[18:13] <kwmonroe> is that unit perhaps out of disk space holocron?
[18:13] <kwmonroe> or inodes?  (df -h && df -i)
[18:13] <holocron> nope, lots of space
[18:14] <holocron> lots of inodes
[18:16] <kwmonroe> holocron: anything in dmesg, /var/log/syslog, or /var/log/apt/* on that container that would help explain the dpkg failure?
[18:18] <holocron> kwmonroe probably, sorry i've got to jump to another thing now but i'll try to get back
[18:19] <kwmonroe> np holocron
[18:36] <Siva> I am trying to remove my application in juju 2.0 but it is not working
[18:36] <Siva> I put pdb.set_trace() in my code
[18:36] <Siva> Not sure if it is because of that
[18:36] <Siva> juju remove-application does not remove the application
[18:37] <Siva> How do I now forcefully remove it?
[18:37] <Siva> Any help is much appreciated
[18:38] <spaok> is there a decorator for the update-status hook? or do I use @hook?
[18:40] <Siva> It is stuck at the install hook where I put pdb
[18:40] <Siva> I have the following decorator for the install hook, @hooks.hook()
[18:41] <spaok> Siva can you remove the machine?
[18:41] <lutostag> Siva: juju resolved --no-retry <unit> # over and over till its gone
[18:41] <kwmonroe> Siva: remove-application will first remove relations between your app and something else, then it will remove all units of your app, then it will remove the machine (if your app was the last unit on a machine)
[18:42] <kwmonroe> Siva: you're probably trapping the remove-relation portion of remove-application, which means you'll need to continue or break or whatever pdb takes to let it continue tearing down relations / units / machines
[18:43] <stub> Siva: The hook will likely never complete, so you either need to go in and kill it yourself or run 'juju remove-machine XXX --force'
[18:43] <kwmonroe> so lutostag's suggestion would work -- keep resolving the app with --no-retry to make your way through the app's lifecycle.  or spaok's suggestion might be faster -- juju remove-machine X --force
[18:43] <spaok> I work with containers, so I tend to do that mostly
[18:44] <stub> (and haven't we all left our Python debugger statements in a hook at some point)
[18:44] <Siva> I removed the machine, it shows the status as 'stopped'
[18:44] <spaok> takes a sec
[18:44] <kwmonroe> keep watching.. it'll go away
[18:45] <spaok> also rerun the remove-application
[18:45] <Siva> OK. @lutostag suggestion did the trick
[18:45] <spaok> I've noticed some application ghosts when I remove machines
[18:45] <Siva> Now it is gone
[18:46] <Siva> Thanks
[18:47] <stub> spaok: yes, it's perfectly valid to have an application deployed to no units. Makes sense when setting up subordinates; more dubious with normal charms.
[18:48] <Siva> One thing I noticed is that after removing the machine (say 0/lxd/14 is removed) now when you deploy it creates the machine 0/lxd/15 rather than 14
[18:48] <Siva> is this expected?
[18:49] <spaok> ya
[18:49] <Siva> same for units as well
[18:49] <spaok> yup
[18:49] <spaok> makes deployer scripts fun
[18:49] <Siva> Why does it not consider the freed ones so that it is in sequence?
[18:50] <rick_h_> Siva: mainly because it makes things like logs more useful when the numbers are unique
[18:50] <rick_h_> Siva: especially as everything is async
[18:51] <rick_h_> Siva: so you can be sure any logs re: unit 15 are in fact the unit that was destroyed at xx:xx
[18:51] <Siva> OK
[18:52] <Siva> It looks a bit weird in the 'juju status' as the sequence is  broken
[18:52] <rick_h_> Siva: yea, understand, but for the best imho
[19:03] <spaok> Siva: that's why I have scripts to destroy and recreate my MAAS/Juju 2.0 dev env; it's good to reset sometimes
[19:05] <Siva> One question
[19:05] <Siva> I have the following entry in the config.yaml file
[19:05] <Siva>   tor-type:
[19:05] <Siva>     type: string
[19:05] <Siva>     description: Always ovs
[19:05] <Siva>     default: ovs
[19:06] <Siva> Now when I do: tor_type = config.get("tor_type"); print "SIVA: ", tor_type
[19:06] <Siva> I expect it to print the default value 'ovs' as I have not set any value
[19:06] <Siva> It prints nothing
[19:06] <Siva> Is this expected?
[19:06] <spaok> tor_type or tor-type?
[19:07] <Siva> oops! my bad
[19:07] <spaok> I put underscores in my config.yaml
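The bug spaok just spotted, as a toy illustration (a stand-in for hookenv.config(), not the real helper): charm config keys must match config.yaml exactly, hyphens and all.

```python
# Toy illustration: charm config lookups use the key exactly as written
# in config.yaml.  With "tor-type" defined there, "tor_type" finds nothing.
config_yaml_defaults = {"tor-type": "ovs"}  # key as written in config.yaml

def config_get(key):
    # stand-in for hookenv.config().get(key)
    return config_yaml_defaults.get(key)

wrong = config_get("tor_type")   # underscores: no such key
right = config_get("tor-type")   # hyphen matches config.yaml
```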
[19:08] <Siva> Now after making the change, I can just 'redeploy', right?
[19:08] <spaok> you can test by editing the charm live if you wanted
[19:08] <Siva> How do I do that?
[19:09] <spaok> vi /var/lib/juju/agents/unit-charmname-id/charm/config.yaml
[19:09] <spaok> kick jujud in the pants by
[19:09] <spaok> ps aux |grep jujud-unit-charmname |grep -v grep | awk '{print $2}' | xargs kill -9
[19:09] <spaok> should cause the charm to run
[19:19] <Siva> I modified the config.yaml and restarted the jujud
[19:19] <Siva> still prints None
[19:30] <spaok> Siva: in my reactive charm I just config('tor_type')
[19:30] <spaok> not config.get
[19:30] <spaok> not sure the diff
[19:56] <Siva> I removed the app and deployed it again
[19:56] <Siva> config.get works
[19:57] <Siva> I can try config('tor_type') and see how it goes
[20:24] <cory_fu_> kwmonroe: Comments on https://github.com/juju-solutions/layer-apache-bigtop-base/pull/50
[20:25] <icey> hey holocron kwmonroe just seeing the messages
[20:25] <icey> to me, the line saying admin_socket: connection refused is more interesting; it looks like maybe the ceph monitor is down?
[20:27] <holocron> icey hey, i ended up tearing down the controller. I'm going to redeploy now and i'll drop a line in here if it happens again
[20:28] <icey> holocron: great; kwmonroe is right though, the expectation is that it wouldn't break
[20:30] <Siva> @spaok, live config.yaml change testing does not work for me
[20:40] <kwmonroe> cory_fu_: i like the callback idea in https://github.com/juju-solutions/layer-apache-bigtop-base/pull/50.  but i'm not gonna get to that by tomorrow, and i really want the base hadoop charms refreshed (which depend on the pr as is).  you ok if i open an issue to do it better in the future, but merge for now?
[20:41] <kwmonroe> cory_fu_: it doesn't matter what you say, mind you.  petevg already +1'd it.  just trying to fake earnest consideration.
[20:41] <cory_fu_> ha
[20:42] <cory_fu_> kwmonroe: I'm also +1 on it as-is, but I'd like to reduce boilerplate where we can.  But we can go back and refactor it later
[20:44] <cory_fu_> kwmonroe: Issue opened and PR merged
[20:44] <cory_fu_> kwmonroe: And I'm good on the other commits you made for the xenial updates
[20:48] <kwmonroe> dag nab cory_fu_!  you're fast.  i was still pontificating on the title of a new issue.  thanks!!
[20:50] <kwmonroe> and thanks for the xenial +1.  i'll squash, pr, and set the clock for 24 hours till i can push ;)
[20:50] <kwmonroe> just think, you'll be swimming when the upstream charms get refreshed.
[20:50] <kwmonroe> nice knowing you
[20:58] <kwmonroe> before you go cory_fu_.. did you see mark's note to the juju ML about blocked status?  seems the slave units are reporting "blocked" even when a relation is present.  i'm pretty sure it's because of this:  https://github.com/juju-solutions/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L22
[20:59] <kwmonroe> as in, there *is* a spec mismatch until NN/RM are ready.  what's the harm in setting status with a spec mismatch?
[21:01] <kwmonroe> afaict, they'll report "waiting for ready", which seems ok to me.  unless we want to add a separate block for specifically dealing with spec mismatches, which would be some weird state between waiting and blocked to see if the spec ever does eventually match.
[21:08] <cory_fu_> kwmonroe: I think the problem is one of when hooks are triggered.  I think that what's happening is that the relation is established and reported, but the hook doesn't get called on both sides right away, leaving one side reporting "blocked" even though the relation is there, simply because it hasn't been informed of it yet
[21:11] <kwmonroe> i think i don't agree with ya cory_fu_... NN will send DN info early (https://github.com/juju-solutions/bigtop/blob/bug/BIGTOP-2548/xenial-charm-refresh/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py#L27).  but check out what's missing... https://github.com/juju-solutions/interface-dfs/blob/master/requires.py#L95
[21:12] <cory_fu_> kwmonroe: Doesn't matter.  The waiting vs blocked status only depends on the .joined state: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L36
[21:12] <kwmonroe> spoiler alert cory_fu_:  it's clustername.  we don't send that until NN is all the way up, so the specmatch will be false:  https://github.com/juju-solutions/bigtop/blob/bug/BIGTOP-2548/xenial-charm-refresh/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py#L142
[21:13] <kwmonroe> cory_fu_: you crazy:  https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L22
[21:13] <cory_fu_> kwmonroe: Ooh, I see.
[21:13] <cory_fu_> We should remove that @when_none line.  There's no reason for it
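The waiting-vs-blocked decision being debated here, reduced to plain logic (state names and messages are illustrative; the real code lives in the hadoop_status.py files linked above):

```python
# Sketch of the slave charm's status choice: "blocked" only when the
# relation is missing entirely; merely "waiting" while the related
# namenode hasn't yet published everything (e.g. clustername).
def slave_status(namenode_joined, namenode_ready):
    if not namenode_joined:
        # no relation at all: the operator must add one
        return ("blocked", "missing required namenode relation")
    if not namenode_ready:
        # related, but the remote end isn't fully up yet
        return ("waiting", "waiting for namenode to become ready")
    return ("active", "ready")
```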
[21:14] <kwmonroe> great idea cory_fu_!  if only i had it 15 minutes ago.
[21:14] <cory_fu_> :)
[21:15] <kwmonroe> petevg: you mentioned you also saw charms blocked on missing relations (maybe zookeeper?).  could it be you saw the slaves blocked instead?
[21:16] <neiljerram> Aaaargh guys, would you _please_ stop making gratuitous changes in every Juju 2 beta or rc?
[21:17] <neiljerram> The latest one that has just bitten my testing is the addition of a * after the unit name in juju status output.
[21:17] <neiljerram> Before that it was 'juju set-config' being changed to 'juju config'
[21:17] <neiljerram> This is getting old....
[21:23] <petevg> kwmonroe: Yes. I think that it was probably just the hadoop slave.
[21:23] <kwmonroe> neiljerram: apologies for the headaches!  but you should see much more stability in the RCs.  at least for me, api has been pretty consistent with rc1/rc2.  rick_h_ do you know if there are significant api/output changes in the queue from now to GA?
[21:24] <kwmonroe> thx petevg - fingers crossed that was the only outlier
[21:24] <petevg> np
[21:24] <petevg> fingers crossed.
[21:25] <neiljerram> kwmonroe, tbh I'm afraid I have to say that I think things have been _less_ stable since the transition from -beta to -rc. My guess is that there are changes people have been meaning to make for a while, and only now that GA is really looking likely do they think they should get them out :-)
[21:27] <neiljerram> kwmonroe, but don't worry, I've had my moan now...
[21:27] <kwmonroe> heh neiljerram - fair enough :)
[21:28] <neiljerram> Do you happen to know what the new * means?  Should I expect to see it on every juju status line?
[21:29] <kwmonroe> neiljerram: i was just about to ask you the same thing... i haven't seen the '*'
[21:29] <kwmonroe> neiljerram: you on rc1, 2, or 3?
[21:30] <neiljerram> kwmonroe, rc3 now; here's an excerpt from I test I currently have running:
[21:30] <neiljerram>        UNIT                WORKLOAD  AGENT  MACHINE  PUBLIC-ADDRESS   PORTS  MESSAGE
[21:30] <neiljerram>        calico-devstack/0*  unknown   idle   0        104.197.123.208
[21:30] <neiljerram>        
[21:30] <neiljerram>        MACHINE  STATE    DNS              INS-ID         SERIES  AZ
[21:30] <neiljerram>        0        started  104.197.123.208  juju-0f506f-0  trusty  us-central1-a
[21:32] <neiljerram> kwmonroe, just doing another deployment with more units, to get more data
[21:32] <kwmonroe> hmph... neiljerram i wonder if that's an attempt to truncate the unit name to a certain length.  doesn't make sense in your case, but i could see 'really-long-unit-name/0' being truncated to 'really-long-u*' to keep the status columns sane.
[21:32] <kwmonroe> just a guess neiljerram
[21:33] <kwmonroe> and at any rate neiljerram, if you're scraping 'juju status', you might want to consider scraping 'juju status --format=tabular', which might be more consistent.
[21:33] <neiljerram> kwmonroe, BTW the reason this matters for my automated testing is that I have some quite tricky code that is trying to determine when the deployment as a whole is really ready.
[21:33] <kwmonroe> ugh, not right
[21:34] <kwmonroe> sorry, i meant 'juju status --format=yaml', not tabular
[21:34] <neiljerram> kwmonroe, yes, I suppose that would probably be better
[21:36] <neiljerram> kwmonroe, Ah, it seems that * means 'no longer waiting for machine'
[21:44] <kwmonroe> neiljerram: you sure?  i just went to rc3 and deployed ubuntu... i still see:
[21:44] <kwmonroe> UNIT      WORKLOAD  AGENT       MACHINE  PUBLIC-ADDRESS  PORTS  MESSAGE
[21:44] <kwmonroe> ubuntu/0  waiting   allocating  0        54.153.95.194          waiting for machine
[21:44] <neiljerram> kwmonroe, exactly - because the machine hasn't been started yet
[21:44] <kwmonroe> oh, nm neiljerram, i should wait longer.. you said the '*' is....
[21:44] <kwmonroe> right
[21:45] <kwmonroe> i gotta say, intently watching juju status is right up there with the birth of my first child
[21:51] <kwmonroe> neiljerram: i can't get the '*' after the machine is ready, nor using a super long unit name.  i'm not sure where that's coming from.
[21:51] <kwmonroe> UNIT                       WORKLOAD     AGENT      MACHINE  PUBLIC-ADDRESS  PORTS  MESSAGE
[21:51] <kwmonroe> really-long-ubuntu-name/0  maintenance  executing  1        54.153.97.184          (install) installing charm software
[21:51] <kwmonroe> ubuntu/0                   active       idle       0        54.153.95.194          ready
[21:52] <neiljerram> kwmonroe, do you have rc3?
[21:52] <kwmonroe> neiljerram: i do... http://paste.ubuntu.com/23286448/
[21:55] <neiljerram> kwmonroe, curious, I don't know then.  I'm also using AWS, so it's not because we're using different clouds.
[21:56] <kwmonroe> neiljerram: i'm aws-west.  if you're east, it could be signifying the hurricane coming to the east coast this weekend.
[21:56] <neiljerram> kwmonroe, :-)
[21:56] <kwmonroe> rc3 backed by weather.com ;)
[21:57] <kwmonroe> neiljerram: care to /join #juju-dev?  i'll ask the core devs where the '*' is coming from
[21:57] <neiljerram> kwmonroe, sure, will do
[22:13] <kwmonroe> for anyone following along, the '*' denotes leadership
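For anyone scraping tabular status output the way neiljerram is, a small helper sketch (hypothetical function name) that strips the leader marker before comparing unit names; `--format=yaml` avoids the problem entirely:

```python
# The trailing "*" in tabular `juju status` output marks the
# application leader; strip it when parsing unit names.
def parse_unit_name(field):
    is_leader = field.endswith("*")
    return field.rstrip("*"), is_leader

leader = parse_unit_name("calico-devstack/0*")  # ("calico-devstack/0", True)
follower = parse_unit_name("ubuntu/0")          # ("ubuntu/0", False)
```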
[22:29] <magicaltrout> i thought you were the leader kwmonroe
[22:29] <kwmonroe> that's kwmonroe* to you magicaltrout
[22:30] <magicaltrout> Texas' own Idi Amin
[22:30] <kwmonroe> magicaltrout: you still in southern california? or did you get back to the right side of the atlantic?
[22:31] <magicaltrout> hehe
[22:31] <magicaltrout> i'm back in the motherland for now
[22:31] <magicaltrout> been instructed to report to Washington DC on the 28th November
[22:31] <magicaltrout> so not for long
[22:32] <magicaltrout> although i was hoping a nice sunny jaunt to ApacheCon EU was gonna be the last trip of the year
[22:34] <kwmonroe> must be hard being so popular magicaltrout
[22:34] <magicaltrout> trololol
[22:34] <magicaltrout> whatever
[22:34] <magicaltrout> kwmonroe: is cory_fu_ staying in your basement?
[22:35] <kwmonroe> magicaltrout: two things you should know about central Texas:  1) it's all bedrock; no basements.  2) cory_fu_ was a track lead at the summit; he stays at the Westin.
[22:37] <magicaltrout> Westin? and you lot dumped us at the Marriott?
[22:37] <magicaltrout> I need to upgrade
[22:37] <magicaltrout> get me a real community sponsor!
[22:37] <kwmonroe> apply for track lead in Ghent next year ;)
[22:38] <kwmonroe> comes with a bright orange shirt.. wearable anywhere!
[22:38] <magicaltrout> they did look nice....... sadly I'll be too drunk
[22:38] <magicaltrout> oh
[22:38] <magicaltrout> that never stopped you guys
[22:39] <magicaltrout> i could lead the "werid big data - container crossover"
[22:39] <magicaltrout> s/werid/weird
[22:40] <kwmonroe> pretty sure you're already leading that
[22:40] <magicaltrout> hehe
[22:40] <kwmonroe> i don't understand why mapreduce wasn't enough for you
[22:40] <magicaltrout> yeah i've been tapping up the mesos mailing list the last few days trying to figure out what needs to be done to get LXC support in their container stack
[22:41] <kwmonroe> everything can be solved with mapred.  and if it can't, map it again.
[22:41] <magicaltrout> i'm not a C programmer though so it might take me a while unless my IDE-fu wins
[22:43] <kwmonroe> try emacs
[22:44] <magicaltrout> I use emacs actually kwmonroe :P
[22:44] <magicaltrout> just not for coding :)
[22:44] <kwmonroe> sure magicaltrout.. it's also good as a desktop environment and for playing solitaire.
[22:44] <magicaltrout> exactly see
[22:44] <magicaltrout> you know kwmonroe
[22:44] <magicaltrout> you know
[22:45] <magicaltrout> sorry.... kwmonroe*
[22:45] <kwmonroe> so magicaltrout*, how can we help you with apachecon seville? you've got some drill bit work i presume?
[22:47] <magicaltrout> yeah. Plan to do a bigtop & drill demo
[22:47] <magicaltrout> get some stuff in a bundle so willing volunteers can test etc
[22:47] <magicaltrout> if that latest RC changelog isn't a lie.... drill will even work in LXC which is a bonus
[22:47] <kwmonroe> roger that magicaltrout.. i'll volunteer!
[22:48] <magicaltrout> for the first time ever, the Juju talk is the easier of the two. I'll knock something together next week and we can iterate over it
[22:49] <magicaltrout> we've got a month or so
[22:49] <magicaltrout> try and not leave it to the last minute for a change
[22:49] <kwmonroe> !remindme 1 month
[22:49] <magicaltrout> yeah, well for ApacheCon NA i was writing the talks on the plane
[22:49] <magicaltrout> so you know....
[22:49] <magicaltrout> how bad can it be?
[22:50] <kwmonroe> it could be as bad as a 6 out of 10. but no worse.
[22:51] <magicaltrout> thanks for the reassurance
[23:24] <bdx> cmars: https://github.com/cmars/juju-charm-mattermost/pull/2/files
[23:25] <cmars> bdx, why does it need nginx embedded?
[23:26] <cmars> bdx, the systemd support is nice
[23:26] <bdx> cmars: so I can give my users a clean fqdn with no ports hanging on the end
[23:27] <bdx> cmars: plus, mattermost docs suggest it
[23:27] <cmars> bdx, ok, that makes sense
[23:27] <cmars> bdx, is it worth exposing the mattermost-port at all then?
[23:28] <cmars> might just leave it fixed and local only..
[23:28] <bdx> ok
[23:29] <cmars> bdx, also, let's remove trusty from the series in metadata
[23:29] <bdx> I thought i did ... checking
[23:30] <cmars> bdx, ah, you did, my bad
[23:32] <bdx> cmars: there ya go
[23:32] <lutostag> how would I fix 2016-10-06 23:30:28 ERROR juju.worker.dependency engine.go:526 "leadership-tracker" manifold worker returned unexpected error: leadership failure: lease manager stopped... which daemon should I kick on the unit?
[23:33] <lutostag> 2.0 beta7 (can't tear down and redeploy for a little while unfortunately)
[23:36] <cmars> bdx, thanks, i'll have to test this out, but i could publish it soon. probably need to update the resource as well
[23:39] <bdx> cmars: totally, I was thinking of adding a tls-certificates interface, so if a user desired to have ssl, they could just relate to the easyrsa charms
[23:39] <cmars> bdx, oooh nice!
[23:40] <bdx> actually, I feel that functionality should be a part of the nginx charm though
[23:40] <cmars> bdx, do we have a LE layer yet?
[23:40] <cmars> that'd be really nice for a private secure mattermost
[23:40] <bdx> stokachu: ^^
[23:40] <bdx> cmars, stokachu: https://jujucharms.com/u/containers/easyrsa/2
[23:40] <bdx> LE layer?
[23:41] <cmars> let's encrypt
[23:41] <bdx> oooo
[23:42] <bdx> cmars: there should be
[23:42] <bdx> I know we have discussed it
[23:57] <stokachu> lutostag: switch to the admin controller and ssh into machine 0
[23:57] <stokachu> lutostag: then just pkill jujud and it'll restart and pick back up
[23:57] <stokachu> bdx: sorry some extra context?
[23:57] <stokachu> ah i see nginx