/srv/irclogs.ubuntu.com/2016/10/06/#juju.txt

[00:36] <larrymi_> nicbet: ack
=== skr is now known as Guest61072
[04:32] <konobi> is there a way to tell juju to stop asking for Ubuntu SSO creds?
[04:33] <konobi> (during deploy that is)
[04:41] <jrwren> publish your charm
[04:59] <spaok> if I have a subordinate charm on a service with multiple units, so each service lists my subordinate charm under it, can the subordinate charm get a list of all the units in the service? seems like it only sees itself in related_units
[06:07] <marcoceppi> spaok: right, it can only see its peers
[06:07] <marcoceppi> spaok: what are you trying to get?
[06:36] <spaok> marcoceppi: I was thinking I needed to build a list of units in the charm to pass to haproxy, but I think from what I read about the haproxy charm it will union all the units that join with the same service_name, so I might be ok
[06:37] <spaok> though I really wish the haproxy charm had a python example of how to send it the yaml structure it wants
[06:39] <spaok> I can't seem to figure out how to set the services structure it wants
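
[ed. note: the python example spaok wishes for would look roughly like this -- a minimal sketch based on the haproxy charm's documented "services" relation key; the service and server names here are made up.]

    import yaml
    from charmhelpers.core.hookenv import relation_set, unit_private_ip

    # The haproxy charm reads a YAML-encoded list from the 'services' key on
    # the reverseproxy relation; each entry is one frontend plus its backends.
    services = [{
        'service_name': 'my-web-app',                      # hypothetical
        'service_host': '0.0.0.0',
        'service_port': 80,
        'service_options': ['mode http', 'balance leastconn'],
        'servers': [
            ['my-web-app-0', unit_private_ip(), 8080, 'check'],
        ],
    }]
    relation_set(services=yaml.safe_dump(services))
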
=== frankban|afk is now known as frankban
[07:39] <stub> cholcombe: Yes. It needs to serialize the data so it can detect changes, and json was chosen as the format.
[08:15] <spaok> is relation_set local, or does it send data to the related unit?
[08:18] <spaok> oh, interesting
=== ant_ is now known as Guest48995
[08:31] <stub> spaok: relation_set sets the data on the local end (the only place you can set data), which is visible to the remote units.
[08:32] <stub> spaok: A unit doesn't 'send' data on a relation. It sets the data on its local end and informs other units that it has changed.
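
[ed. note: the model stub describes, in charmhelpers terms -- a sketch; the key names are made up.]

    from charmhelpers.core.hookenv import relation_set, relation_get

    # Writes go to *this* unit's side of the relation only; Juju then fires
    # -relation-changed on the remote units so they can read the new values.
    relation_set(hostname='10.0.0.1', port=8080)

    # Reads in a relation hook default to the *remote* unit's data bucket.
    their_port = relation_get('port')
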
[10:09] <socceroos> hi all
[10:10] <socceroos> Has anyone here tried using juju 2.0 to manage deployments on Digital Ocean?
[10:10] <socceroos> or any version of juju really...
[10:22] <socceroos> ping...
[12:10] <SimonKLB> is it possible to run charms that require authentication with the bundletester?
[12:11] <SimonKLB> and/or how can i run bundletester/amulet tests on charms that use resources?
[12:39] <anshul_> #juju
[12:41] <anshul_> facing an issue while testing Glance using Amulet. Glance is loaded locally, and when I execute 'juju status' the state does not change; it remains at: glance: charm: local:xenial/glance-150, exposed: false, service-status: current: unknown, message: Waiting for agent initialization to finish, since: 06 Oct 2016 23:12:45+05:30, relations: amqp: [rabbitmq-server], cluster: [glance]
[12:42] <anshul_> facing an issue while testing Glance using Amulet. Glance is loaded locally, and when I execute 'juju status' the state does not change; it remains at "Waiting for agent initialization to finish".. and if I change the series to trusty it works fine
[12:43] <anshul_> Please help
=== scuttle|afk is now known as scuttlemonkey
[13:53] <nicbet> larrymi_ what version of VMWare are you running? I just can't get this to work properly. I've tried juju 1.25.6 overnight, as well as using trusty images. Always the same result, with the public key not being injected properly and bootstrap failing.
[14:14] <petevg> bradm: did you ever get a chance to move the bip stuff to its own namespace? If so, would you go and close out the old reviews for it? (one of them is here: https://code.launchpad.net/~josvaz/charms/trusty/bip/client_side_ssl-with_helper-lp1604894/+merge/301802)
[14:16] <larrymi_> nicbet: I'm also using 6.0, build 3620759 for ESXi and 3644788 for the vCenter. I suspect it's a different issue, but you'd need to be able to get to the logs. If I had to guess, I would guess that cloud-init is not able to get to a resource (perhaps blocked by the firewall)
[14:17] <nicbet> larrymi_ I'll have to do shenanigans and mount the failed bootstrap VM's hard drive to a different vm to access the logs... let me see
[14:18] <larrymi_> nicbet: ok
[14:20] <nicbet> larrymi_ would you mind sharing your redacted custom cloud yaml for vsphere? maybe I'm missing a config entry.
[14:22] <larrymi_>   vsphere:
[14:22] <larrymi_>     type: vsphere
[14:22] <larrymi_>     auth-types: [userpass]
[14:22] <larrymi_>     endpoint: **.***.*.***
[14:22] <larrymi_>     regions:
[14:22] <larrymi_>       dc0: {}
[14:23] <nicbet> pretty much what I have; for me dc0 is replaced with our datacenter name
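
[ed. note: for context, the fragment pasted above lives under a top-level "clouds:" key in the YAML file passed to `juju add-cloud`. A redacted sketch; the endpoint stays a placeholder.]

    clouds:
      vsphere:
        type: vsphere
        auth-types: [userpass]
        endpoint: <vcenter-address>   # redacted above; your vCenter IP or hostname
        regions:
          dc0: {}                     # the key is your datacenter name
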
[15:06] <nicbet> larrymi_ : cloud-init.log on the mounted drive states that neither DataSourceOVFNet nor DataSourceOVF could load any data. Notably this line appears too: ubuntu [CLOUDINIT] DataSourceOVF.py[DEBUG]: Customization for VMware platform is disabled.
[15:07] <nicbet> larrymi_ : did you configure something special on the vmware itself to enable OVF customization?
[15:09] <shruthima> hi kwmonroe, we are charming the websphere liberty charm for Z. This charm provides httpport, httpsport, and hostname. We thought to make use of the http interface https://github.com/juju-solutions/interface-http but we noticed the http interface will provide only httpport and hostname. So, we want to know: do we need to write a new interface?
[15:19] <larrymi_> nicbet: I didn't have to configure anything... just the export prior to bootstrap that I mentioned earlier, but it won't bootstrap at all without that.
[15:19] <larrymi_> nicbet: interesting that it's disabled.. I wonder what it's keying on.
[15:21] <nicbet> larrymi_ the cloudimage has the open-vm-tools package installed. Upon boot they are started; I see that from the logs. Then cloud-init tries all the different data sources, like the EC2 store, config ISO in CD-ROM, etc. One of them is OVFDatasource, which uses vmware tools to grab the info from the ovf template configuration
[15:21] <nicbet> larrymi_ at that point it's not given anything, and logs the above line about guest customization being disabled
[15:23] <nicbet> larrymi_ dumb question ... are you running juju against a vCenter or a vSphere ESXi?
[15:24] <larrymi_> nicbet: a vCenter
[15:24] <larrymi_> nicbet: which log are you looking at?
[15:25] <nicbet> larrymi_ /var/log/cloud-init.log and /var/log/vmware-vmsvc.log together with https://github.com/number5/cloud-init/blob/master/cloudinit/sources/DataSourceOVF.py
[15:28] <larrymi_> nicbet: I do have the same: Oct  5 19:57:55 ubuntu [CLOUDINIT] DataSourceOVF.py[DEBUG]: Customization for VMware platform is disabled.
[15:28] <larrymi_> nicbet: can probably be ignored
[15:29] <nicbet> larrymi_ do you find a line that says it loaded cloud-init data from the OVF source?
[15:30] <larrymi_> nicbet: yes, it then goes on to: Oct  5 19:57:54 ubuntu [CLOUDINIT] __init__.py[DEBUG]: Seeing if we can get any data from <class 'cloudinit.sources.DataSourceNoCloud.DataSourceNoCloud'>
[15:31] <nicbet> larrymi_ grep 'data found' cloud-init.log prints all lines as 'no local data found from ***' where *** is the DataSource it tried
[15:33] <rufus> https://www.youtube.com/watch?v=lm64uOErZ8w
[15:34] <larrymi_> nicbet: yeah they're there
[15:34] <larrymi_> Oct  5 19:57:55 ubuntu [CLOUDINIT] handlers.py[DEBUG]: finish: init-local/search-NoCloud: SUCCESS: no local data found from DataSourceNoCloud
[15:34] <larrymi_> Oct  5 19:57:55 ubuntu [CLOUDINIT] handlers.py[DEBUG]: start: init-local/search-ConfigDrive: searching for local data from DataSourceConfigDrive
[15:34] <nicbet> larrymi_ is there any that says data was found?
[15:40] <larrymi_> nicbet: the logs don't say data was found specifically, but they seem to indicate that it only fails to get the data locally.
[15:45] <nicbet> larrymi_ I have a hunch that this only works with vCenter
[15:54] <kwmonroe> shruthima: i wouldn't write a new interface, just open an issue and/or provide a pull request to include an https port here: https://github.com/juju-solutions/interface-http
[15:55] <shruthima> ok sure kwmonroe
[15:59] <shruthima> kwmonroe: May I know when the xenial series of the current IBM-IM charm will be pulled back? so we can push the ibm-im for Z
[16:00] <kwmonroe> shruthima: i'll try to complete that before my EOD, so roughly 7 hours from now.
[16:01] <shruthima> oh k thanku
[16:11] <larrymi_> nicbet: yes, could be. I haven't tried with an ESXi host as endpoint
[16:14] <shruthima> kwmonroe: i have edited the http interface provides.py http://paste.ubuntu.com/23285185/ and requires.py http://paste.ubuntu.com/23285180/ locally and tested; it is working fine. So is there any way to create a merge proposal for the http interface?
[16:16] <shruthima> kwmonroe: i have seen https://github.com/juju-solutions/interface-http/issues/5 -- a similar issue is opened for the http interface also
[16:48] <kwmonroe> shruthima: issue #5 would allow you to define multiple http interfaces in your charm's metadata and react to them differently (with different ports).  if that would solve your needs, you can just add a comment to issue #5 saying it would be useful to you as well.  if you instead require multiple ports for a single http interface, i think that's a separate issue.  you could run "git diff" in the directory where you made your edits, paste the output at http://paste.ubuntu.com, and then include that paste link in a new issue here: https://github.com/juju-solutions/interface-http/issues.
[16:50] <shruthima> ok thanks kevin
[17:01] <holocron> could someone take a look at this pastebin and tell me what next steps to debugging might be? http://pastebin.com/7MHZV4e3
[17:03] <holocron> this, specifically: unit-ceph-1: 12:57:00 INFO unit.ceph/1.update-status admin_socket: exception getting command descriptions: [Errno 111] Connection refused
=== frankban is now known as frankban|afk
[17:21] <spaok> stub: I guess what I'm confused on is: when I try to set the services structure for when haproxy joins, if I do hookenv.relation_set(relation_id='somerelid:1', service_yaml) then the joined/changed hook runs but haproxy doesn't get the services yaml; however if I do hookenv.relation_set(relation_id=None, service_yaml) then it does work, and haproxy builds the config right, but after a bit when update-status runs it errors because relation_id isn't set
[17:23] <stub> spaok: Specifying None for the relation id means use the JUJU_RELATION_ID environment variable, which is only set in relation hooks. Specifying an explicit relation id does the same thing, but will work in all hooks, provided you used the correct relation id.
[17:25] <stub> spaok: You can test this using "juju run foo/0 'relation-set -r somerelid:1 foo=bar'" if you want.
[17:28] <stub> (juju run --unit foo/0 now it seems, with 2.0)
[17:28] <stub> "juju run --unit foo/0 'relation-ids somerelname'" to list the relation ids in play
[17:29] <bdx> hows it going everyone?
[17:29] <bdx> do I have the capability to bootstrap to a specific subnet?
[17:29] <bdx> using aws provider
[17:31] <lutostag> any way to specify "charm build" deps? (for instance in my wheelhouse I have cffi, which when the charm is built depends on having libffi-dev installed. I have this in the README, but wondering if it's possible to enforce programmatically)
[17:31] <spaok> thanks stub, I'm fairly certain I have the right relid, but what I see from the haproxy side is only the private_ip and unit id or something else; with None, I get the services yaml. It's very confusing
[17:31] <spaok> I'll look at trying that command, I was wondering how to run the relation-set command
[17:32] <spaok> lutostag: layer.yaml?
[17:32] <spaok> not 100% on that
[17:34] <lutostag> spaok: yeah, I have deps for install time unit-side like http://pastebin.ubuntu.com/23285512/, but not sure how to do it "charm build" side
=== alexisb is now known as alexisb-afk
=== fginther` is now known as fginther
[17:37] <spaok> ah gotcha, not sure
[17:38] <lutostag> think I'll go with a wrapper around charm build in the top-level dir, don't need to turn charm into pip/apt/npm/snap anyways
[17:53] <kwmonroe> hey icey - can you help holocron with his ceph connection refused?  http://pastebin.com/7MHZV4e3
[17:57] <holocron> Something in dpkg giving an Input/output error
[18:00] <kwmonroe> lutostag: seems odd that an entry in your wheelhouse.txt would require a -dev to be installed for charm build
[18:02] <lutostag> kwmonroe: doesn't it though; it builds it locally, by compiling stuff. I guess there are c-extensions in the python package itself
[18:02] <lutostag> lxml is another example
[18:02] <kwmonroe> cory_fu_: does charm build have runtime reqs dependent on the wheelhouse?
[18:02] <kwmonroe> cory_fu_: (see lutostag's issue above)
[18:03] <lutostag> (but I can get around that one, cause we have that deb'd up)
[18:04] <holocron> kwmonroe icey going to try this http://askubuntu.com/questions/139377/unable-to-install-any-updates-through-update-manager-apt-get-upgrade
[18:04] <cory_fu_> lutostag, kwmonroe: You shouldn't need a -dev package for charm-build because it *should* be installing source only and building them on the deployed unit, since we don't know the arch ahead of time.
[18:05] <lutostag> ah, so I'll need these -dev's on the unit-side, good to know, but interesting.
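
[ed. note: one way to enforce lutostag's unit-side build deps programmatically -- a sketch assuming the basic layer's "packages" option in layer.yaml; not verified against his charm.]

    # layer.yaml -- layer:basic installs these apt packages on the unit
    # before the wheelhouse is built there, so C extensions (cffi, lxml)
    # have the headers they need to compile.
    includes: ['layer:basic']
    options:
      basic:
        packages:
          - libffi-dev
          - build-essential
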
[18:05] <kwmonroe> holocron: that doesn't sound like a ceph problem then... got a log with the dpkg error?
[18:06] <cory_fu_> lutostag: What's the repo for cffi so I can try building it?
[18:06] <lutostag> cory_fu_: my charm or the upstream python package?
[18:07] <cory_fu_> lutostag: The charm
[18:07] <lutostag> cory_fu_: lemme push...
[18:07] <cory_fu_> Sorry, I misread cffi as the charm name
[18:08] <holocron> kwmonroe: similar messages to this: http://pastebin.com/XZ0uFfz8
[18:08] <holocron> I've had make and build-essential give the error, right now it's libdpkg-perl
[18:10] <kwmonroe> holocron: when do you see that? on charm deploy?
[18:10] <lutostag> cory_fu_: bzr branch lp:~lutostag/oil-ci/charm-weebl+weasyprint-dep
[18:11] <holocron> kwmonroe no, the charm deployed fine yesterday, it came in as part of the openstack-on-lxd bundle. i was able to create a cinder volume even.. logged in today and saw that error in my first pastebin
[18:11] <lutostag> (and I'll need to add the deps as explained too)
[18:11] <holocron> i logged into the unit and did an apt-get clean and apt-get update
[18:11] <holocron> now it's failing with this
[18:12] <holocron> is it common practice to scale out another unit and then tear down the breaking one?
[18:12] <holocron> like, should i just make that my default practice or should i try to fix this unit?
[18:12] <kwmonroe> holocron: common practice is for the breaking unit not to break
[18:13] <kwmonroe> especially on some nonsense apt operation
[18:13] <holocron> :P yeah that's the ideal
[18:13] <kwmonroe> is that unit perhaps out of disk space holocron?
[18:13] <kwmonroe> or inodes?  (df -h && df -i)
[18:13] <holocron> nope, lots of space
[18:14] <holocron> lots of inodes
[18:16] <kwmonroe> holocron: anything in dmesg, /var/log/syslog, or /var/log/apt/* on that container that would help explain the dpkg failure?
[18:18] <holocron> kwmonroe probably, sorry i've got to jump to another thing now but i'll try to get back
[18:19] <kwmonroe> np holocron
[18:36] <Siva> I am trying to remove my application in juju 2.0 but it is not working
[18:36] <Siva> I put pdb.set_trace() in my code
[18:36] <Siva> Not sure if it is because of that
[18:36] <Siva> juju remove-application does not remove the application
[18:37] <Siva> How do I now forcefully remove it?
[18:37] <Siva> Any help is much appreciated
[18:38] <spaok> is there a decorator for the update-status hook? or do I use @hook?
[18:40] <Siva> It is stuck at the install hook where I put pdb
[18:40] <Siva> I have the following decorator for the install hook: @hooks.hook()
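
[ed. note: both decorator styles spaok and Siva mention exist -- a minimal sketch assuming charmhelpers for classic charms and charms.reactive for reactive ones.]

    # Classic charm: charmhelpers' Hooks. Name the hook explicitly, or it
    # defaults to the decorated function's name.
    import sys
    from charmhelpers.core.hookenv import Hooks

    hooks = Hooks()

    @hooks.hook('update-status')
    def update_status():
        pass

    if __name__ == '__main__':
        hooks.execute(sys.argv)

    # Reactive charm: charms.reactive's @hook decorator instead:
    #
    #   from charms.reactive import hook
    #
    #   @hook('update-status')
    #   def on_update_status():
    #       pass
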
[18:41] <spaok> Siva can you remove the machine?
[18:41] <lutostag> Siva: juju resolved --no-retry <unit> # over and over till its gone
[18:41] <kwmonroe> Siva: remove-application will first remove relations between your app and something else, then it will remove all units of your app, then it will remove the machine (if your app was the last unit on a machine)
[18:42] <kwmonroe> Siva: you're probably trapping the remove-relation portion of remove-application, which means you'll need to continue or break or whatever pdb takes to let it continue tearing down relations / units / machines
[18:43] <stub> Siva: The hook will likely never complete, so you either need to go in and kill it yourself or run 'juju remove-machine XXX --force'
[18:43] <kwmonroe> so lutostag's suggestion would work -- keep resolving the app with --no-retry to make your way through the app's lifecycle.  or spaok's suggestion might be faster -- juju remove-machine X --force
[18:43] <spaok> I work with containers, so I tend to do that mostly
[18:44] <stub> (and haven't we all left our Python debugger statements in a hook at some point)
[18:44] <Siva> I removed the machine, it shows the status as 'stopped'
[18:44] <spaok> takes a sec
[18:44] <kwmonroe> keep watching.. it'll go away
[18:45] <spaok> also rerun the remove-application
[18:45] <Siva> OK. @lutostag suggestion did the trick
[18:45] <spaok> I've noticed some application ghosts when I remove machines
[18:45] <Siva> Now it is gone
[18:46] <Siva> Thanks
[18:47] <stub> spaok: yes, it's perfectly valid to have an application deployed to no units. Makes sense when setting up subordinates, more dubious with normal charms.
[18:48] <Siva> One thing I noticed is that after removing a machine (say 0/lxd/14 is removed), when you deploy again it creates the machine 0/lxd/15 rather than 14
[18:48] <Siva> is this expected?
[18:49] <spaok> ya
[18:49] <Siva> same for units as well
[18:49] <spaok> yup
[18:49] <spaok> makes deployer scripts fun
[18:49] <Siva> Why does it not consider the freed ones so that it is in sequence?
[18:50] <rick_h_> Siva: mainly because it makes things like logs more useful when the numbers are unique
[18:50] <rick_h_> Siva: especially as everything is async
[18:51] <rick_h_> Siva: so you can be sure any logs re: unit 15 are in fact the unit that was destroyed at xx:xx
[18:51] <Siva> OK
[18:52] <Siva> It looks a bit weird in 'juju status' as the sequence is broken
[18:52] <rick_h_> Siva: yea, understand, but for the best imho
[19:03] <spaok> Siva: that's why I have scripts to destroy and recreate my MAAS/Juju 2.0 dev env; good to reset sometimes
[19:05] <Siva> One question
[19:05] <Siva> I have the following entry in the config.yaml file:
[19:05] <Siva> tor-type: { type: string, description: Always ovs, default: ovs }
[19:06] <Siva> Now when I do: tor_type = config.get("tor_type"); print "SIVA: ", tor_type
[19:06] <Siva> I expect it to print the default value 'ovs' as I have not set any value
[19:06] <Siva> It prints nothing
[19:06] <Siva> Is this expected?
[19:06] <spaok> tor_type or tor-type?
[19:07] <Siva> oops! my bad
[19:07] <spaok> I put underscores in my config.yaml
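
[ed. note: Siva's mismatch in full -- the key in config.yaml must match the lookup key exactly. A sketch assuming charmhelpers; 'tor-type' is the key from the config.yaml above.]

    from charmhelpers.core.hookenv import config

    # config.yaml declares the option as:
    #   options:
    #     tor-type:
    #       type: string
    #       description: Always ovs
    #       default: ovs
    cfg = config()                  # dict-like Config of current values
    tor_type = cfg.get('tor-type')  # 'ovs'; cfg.get('tor_type') returns None
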
[19:08] <Siva> Now after making the change, I can just 'redeploy', right?
[19:08] <spaok> you can test by editing the charm live if you wanted
[19:08] <Siva> How do I do that?
[19:09] <spaok> vi /var/lib/juju/agents/unit-charmname-id/charms/config.yaml
[19:09] <spaok> kick jujud in the pants by
[19:09] <spaok> ps aux | grep jujud-unit-charmname | grep -v grep | awk '{print $2}' | xargs kill -9
[19:09] <spaok> should cause the charm to run
[19:19] <Siva> I modified the config.yaml and restarted the jujud
[19:19] <Siva> still prints None
=== alexisb-afk is now known as alexisb
[19:30] <spaok> Siva: in my reactive charm I just config('tor_type')
[19:30] <spaok> not config.get
[19:30] <spaok> not sure the diff
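
[ed. note: the diff spaok wonders about, assuming charmhelpers: hookenv.config is a function, so the two spellings land in the same place.]

    from charmhelpers.core.hookenv import config

    # Called with a key, config() runs `config-get <key>` and returns just
    # that value; called with no arguments it returns the whole dict-like
    # Config object, which is what .get() is being called on above.
    assert config('tor-type') == config().get('tor-type')
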
[19:56] <Siva> I removed the app and deployed it again
[19:56] <Siva> config.get works
[19:57] <Siva> I can try config('tor_type') and see how it goes
[20:24] <cory_fu_> kwmonroe: Comments on https://github.com/juju-solutions/layer-apache-bigtop-base/pull/50
[20:25] <icey> hey holocron kwmonroe just seeing the messages
[20:25] <icey> to me, the line saying admin_socket: connection refused is more interesting; it looks like maybe the ceph monitor is down?
[20:27] <holocron> icey hey, i ended up tearing down the controller. I'm going to redeploy now and i'll drop a line in here if it happens again
[20:28] <icey> holocron: great; kwmonroe is right though, the expectation is that it wouldn't break
[20:30] <Siva> @spaok, live config.yaml change testing does not work for me
=== rcj` is now known as rcj
=== rcj is now known as Guest46240
=== Guest46240 is now known as rcj
[20:40] <kwmonroe> cory_fu_: i like the callback idea in https://github.com/juju-solutions/layer-apache-bigtop-base/pull/50.  but i'm not gonna get to that by tomorrow, and i really want the base hadoop charms refreshed (which depend on the pr as is).  you ok if i open an issue to do it better in the future, but merge for now?
[20:41] <kwmonroe> cory_fu_: it doesn't matter what you say, mind you.  petevg already +1'd it.  just trying to fake earnest consideration.
[20:41] <cory_fu_> ha
[20:42] <cory_fu_> kwmonroe: I'm also +1 on it as-is, but I'd like to reduce boilerplate where we can.  But we can go back and refactor it later
[20:44] <cory_fu_> kwmonroe: Issue opened and PR merged
[20:44] <cory_fu_> kwmonroe: And I'm good on the other commits you made for the xenial updates
[20:48] <kwmonroe> dag nab cory_fu_!  you're fast.  i was still pontificating on the title of a new issue.  thanks!!
[20:50] <kwmonroe> and thanks for the xenial +1.  i'll squash, pr, and set the clock for 24 hours till i can push ;)
[20:50] <kwmonroe> just think, you'll be swimming when the upstream charms get refreshed.
[20:50] <kwmonroe> nice knowing you
[20:58] <kwmonroe> before you go cory_fu_.. did you see mark's note to the juju ML about blocked status?  seems the slave units are reporting "blocked" even when a relation is present.  i'm pretty sure it's because of this:  https://github.com/juju-solutions/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L22
[20:59] <kwmonroe> as in, there *is* a spec mismatch until NN/RM are ready.  what's the harm in setting status with a spec mismatch?
[21:01] <kwmonroe> afaict, they'll report "waiting for ready", which seems ok to me.  unless we want to add a separate block for specifically dealing with spec mismatches, which would be some weird state between waiting and blocked to see if the spec ever does eventually match.
[21:08] <cory_fu_> kwmonroe: I think the problem is one of when hooks are triggered.  I think that what's happening is that the relation is established and reported, but the hook doesn't get called on both sides right away, leaving one side reporting "blocked" even though the relation is there, simply because it hasn't been informed of it yet
[21:11] <kwmonroe> i think i don't agree with ya cory_fu_... NN will send DN info early (https://github.com/juju-solutions/bigtop/blob/bug/BIGTOP-2548/xenial-charm-refresh/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py#L27).  but check out what's missing... https://github.com/juju-solutions/interface-dfs/blob/master/requires.py#L95
[21:12] <cory_fu_> kwmonroe: Doesn't matter.  The waiting vs blocked status only depends on the .joined state: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L36
[21:12] <kwmonroe> spoiler alert cory_fu_:  it's clustername.  we don't send that until NN is all the way up, so the specmatch will be false:  https://github.com/juju-solutions/bigtop/blob/bug/BIGTOP-2548/xenial-charm-refresh/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py#L142
[21:13] <kwmonroe> cory_fu_: you crazy:  https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L22
[21:13] <cory_fu_> kwmonroe: Ooh, I see.
[21:13] <cory_fu_> We should remove that @when_none line.  There's no reason for it
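
[ed. note: the waiting-vs-blocked pattern under discussion, reduced to a generic charms.reactive sketch -- the state names here are illustrative, not the bigtop layer's actual code.]

    from charms.reactive import when, when_not
    from charmhelpers.core.hookenv import status_set

    @when_not('namenode.joined')
    def report_blocked():
        # No relation at all: operator action is needed, so 'blocked'.
        status_set('blocked', 'missing required namenode relation')

    @when('namenode.joined')
    @when_not('namenode.ready')
    def report_waiting():
        # Related, but the remote side isn't ready yet: just 'waiting'.
        status_set('waiting', 'waiting for namenode to become ready')
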
[21:14] <kwmonroe> great idea cory_fu_!  if only i had it 15 minutes ago.
[21:14] <cory_fu_> :)
[21:15] <kwmonroe> petevg: you mentioned you also saw charms blocked on missing relations (maybe zookeeper?).  could it be you saw the slaves blocked instead?
[21:16] <neiljerram> Aaaargh guys, would you _please_ stop making gratuitous changes in every Juju 2 beta or rc?
[21:17] <neiljerram> The latest one that has just bitten my testing is the addition of a * after the unit name in juju status output.
[21:17] <neiljerram> Before that it was 'juju set-config' being changed to 'juju config'
[21:17] <neiljerram> This is getting old....
[21:23] <petevg> kwmonroe: Yes. I think that it was probably just the hadoop slave.
[21:23] <kwmonroe> neiljerram: apologies for the headaches!  but you should see much more stability in the RCs.  at least for me, the api has been pretty consistent with rc1/rc2.  rick_h_ do you know if there are significant api/output changes in the queue from now to GA?
[21:24] <kwmonroe> thx petevg - fingers crossed that was the only outlier
[21:24] <petevg> np
[21:24] <petevg> fingers crossed.
[21:25] <neiljerram> kwmonroe, tbh I'm afraid I have to say that I think things have been _less_ stable since the transition from -beta to -rc.  My guess is that there are changes that people have been thinking they should do for a while, and only now that the GA is really looking likely do they think they should get them out :-)
[21:27] <neiljerram> kwmonroe, but don't worry, I've had my moan now...
[21:27] <kwmonroe> heh neiljerram - fair enough :)
[21:28] <neiljerram> Do you happen to know what the new * means?  Should I expect to see it on every juju status line?
[21:29] <kwmonroe> neiljerram: i was just about to ask you the same thing... i haven't seen the '*'
[21:29] <kwmonroe> neiljerram: you on rc1, 2, or 3?
[21:30] <neiljerram> kwmonroe, rc3 now; here's an excerpt from a test I currently have running:
[21:30] <neiljerram>   UNIT                WORKLOAD  AGENT  MACHINE  PUBLIC-ADDRESS   PORTS  MESSAGE
[21:30] <neiljerram>   calico-devstack/0*  unknown   idle   0        104.197.123.208
[21:30] <neiljerram>
[21:30] <neiljerram>   MACHINE  STATE    DNS              INS-ID         SERIES  AZ
[21:30] <neiljerram>   0        started  104.197.123.208  juju-0f506f-0  trusty  us-central1-a
[21:32] <neiljerram> kwmonroe, just doing another deployment with more units, to get more data
[21:32] <kwmonroe> hmph... neiljerram i wonder if that's an attempt to truncate the unit name to a certain length.  doesn't make sense in your case, but i could see 'really-long-unit-name/0' being truncated to 'really-long-u*' to keep the status columns sane.
[21:32] <kwmonroe> just a guess neiljerram
[21:33] <kwmonroe> and at any rate neiljerram, if you're scraping 'juju status', you might want to consider scraping 'juju status --format=tabular', which might be more consistent.
[21:33] <neiljerram> kwmonroe, BTW the reason this matters for my automated testing is that I have some quite tricky code that is trying to determine when the deployment as a whole is really ready.
[21:33] <kwmonroe> ugh, not right
[21:34] <kwmonroe> sorry, i meant 'juju status --format=yaml', not tabular
[21:34] <neiljerram> kwmonroe, yes, I suppose that would probably be better
[21:36] <neiljerram> kwmonroe, Ah, it seems that * means 'no longer waiting for machine'
[21:44] <kwmonroe> neiljerram: you sure?  i just went to rc3 and deployed ubuntu... i still see:
[21:44] <kwmonroe> UNIT      WORKLOAD  AGENT       MACHINE  PUBLIC-ADDRESS  PORTS  MESSAGE
[21:44] <kwmonroe> ubuntu/0  waiting   allocating  0        54.153.95.194          waiting for machine
[21:44] <neiljerram> kwmonroe, exactly - because the machine hasn't been started yet
[21:44] <kwmonroe> oh, nm neiljerram, i should wait longer.. you said the '*' is....
[21:44] <kwmonroe> right
[21:45] <kwmonroe> i gotta say, intently watching juju status is right up there with the birth of my first child
[21:51] <kwmonroe> neiljerram: i can't get the '*' after the machine is ready, nor using a super long unit name.  i'm not sure where that's coming from.
[21:51] <kwmonroe> UNIT                       WORKLOAD     AGENT      MACHINE  PUBLIC-ADDRESS  PORTS  MESSAGE
[21:51] <kwmonroe> really-long-ubuntu-name/0  maintenance  executing  1        54.153.97.184          (install) installing charm software
[21:51] <kwmonroe> ubuntu/0                   active       idle       0        54.153.95.194          ready
[21:52] <neiljerram> kwmonroe, do you have rc3?
[21:52] <kwmonroe> neiljerram: i do... http://paste.ubuntu.com/23286448/
[21:55] <neiljerram> kwmonroe, curious, I don't know then.  I'm also using AWS, so it's not because we're using different clouds.
[21:56] <kwmonroe> neiljerram: i'm aws-west.  if you're east, it could be signifying the hurricane coming to the east coast this weekend.
[21:56] <neiljerram> kwmonroe, :-)
[21:56] <kwmonroe> rc3 backed by weather.com ;)
[21:57] <kwmonroe> neiljerram: care to /join #juju-dev?  i'll ask the core devs where the '*' is coming from
[21:57] <neiljerram> kwmonroe, sure, will do
[22:13] <kwmonroe> for anyone following along, the '*' denotes leadership
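
[ed. note: charm code can ask the same question the asterisk answers -- a minimal sketch assuming charmhelpers' is_leader helper.]

    from charmhelpers.core.hookenv import is_leader

    # is_leader() wraps the `is-leader` hook tool; exactly one unit of an
    # application holds leadership at a time (the '*' in juju status).
    if is_leader():
        print('this unit is the application leader')
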
[22:29] <magicaltrout> i thought you were the leader kwmonroe
[22:29] <kwmonroe> that's kwmonroe* to you magicaltrout
[22:30] <magicaltrout> Texas' own Idi Amin
[22:30] <kwmonroe> magicaltrout: you still in southern california?  or did you get back to the right side of the atlantic?
[22:31] <magicaltrout> hehe
[22:31] <magicaltrout> i'm back in the motherland for now
[22:31] <magicaltrout> been instructed to report to Washington DC on the 28th November
[22:31] <magicaltrout> so not for long
[22:32] <magicaltrout> although i was hoping a nice sunny jaunt to ApacheCon EU was gonna be the last trip of the year
[22:34] <kwmonroe> must be hard being so popular magicaltrout
[22:34] <magicaltrout> trololol
[22:34] <magicaltrout> whatever
[22:34] <magicaltrout> kwmonroe: is cory_fu_ staying in your basement?
[22:35] <kwmonroe> magicaltrout: two things you should know about central Texas:  1) it's all bedrock; no basements.  2) cory_fu_ was a track lead at the summit; he stays at the Westin.
[22:37] <magicaltrout> Westin? and you lot dumped us at the Marriott?
[22:37] <magicaltrout> I need to upgrade
[22:37] <magicaltrout> get me a real community sponsor!
[22:37] <kwmonroe> apply for track lead in Ghent next year ;)
[22:38] <kwmonroe> comes with a bright orange shirt.. wearable anywhere!
[22:38] <magicaltrout> they did look nice....... sadly I'll be too drunk
[22:38] <magicaltrout> oh
[22:38] <magicaltrout> that never stopped you guys
[22:39] <magicaltrout> i could lead the "werid big data - container crossover"
[22:39] <magicaltrout> s/werid/weird
[22:40] <kwmonroe> pretty sure you're already leading that
[22:40] <magicaltrout> hehe
[22:40] <kwmonroe> i don't understand why mapreduce wasn't enough for you
[22:40] <magicaltrout> yeah i've been tapping up the mesos mailing list the last few days trying to figure out what needs to be done to get LXC support in their container stack
[22:41] <kwmonroe> everything can be solved with mapred.  and if it can't, map it again.
[22:41] <magicaltrout> i'm not a C programmer though so it might take me a while unless my IDE-fu wins
[22:43] <kwmonroe> try emacs
[22:44] <magicaltrout> I use emacs actually kwmonroe :P
[22:44] <magicaltrout> just not for coding :)
[22:44] <kwmonroe> sure magicaltrout.. it's also good as a desktop environment and for playing solitaire.
[22:44] <magicaltrout> exactly see
[22:44] <magicaltrout> you know kwmonroe
[22:44] <magicaltrout> you know
[22:45] <magicaltrout> sorry.... kwmonroe*
[22:45] <kwmonroe> so magicaltrout*, how can we help you with apachecon seville?  you've got some drill bit work i presume?
[22:47] <magicaltrout> yeah. Plan to do a bigtop & drill demo
[22:47] <magicaltrout> get some stuff in a bundle so willing volunteers can test etc
[22:47] <magicaltrout> if that latest RC changelog isn't a lie.... drill will even work in LXC which is a bonus
[22:47] <kwmonroe> roger that magicaltrout.. i'll volunteer!
[22:48] <magicaltrout> for the first time ever, the Juju talk is the easier of the two. I'll knock something together next week and we can iterate over it
[22:49] <magicaltrout> we've got a month or so
[22:49] <magicaltrout> try and not leave it to the last minute for a change
[22:49] <kwmonroe> !remindme 1 month
[22:49] <magicaltrout> yeah, well for ApacheCon NA i was writing the talks on the plane
[22:49] <magicaltrout> so you know....
[22:49] <magicaltrout> how bad can it be?
[22:50] <kwmonroe> it could be as bad as a 6 out of 10. but no worse.
[22:51] <magicaltrout> thanks for the reassurance
[23:24] <bdx> cmars: https://github.com/cmars/juju-charm-mattermost/pull/2/files
[23:25] <cmars> bdx, why does it need nginx embedded?
[23:26] <cmars> bdx, the systemd support is nice
[23:26] <bdx> cmars: so I can give my users a clean fqdn with no ports hanging on the end
[23:27] <bdx> cmars: plus, mattermost docs suggest it
[23:27] <cmars> bdx, ok, that makes sense
[23:27] <cmars> bdx, is it worth exposing the mattermost-port at all then?
[23:28] <cmars> might just leave it fixed and local only..
[23:28] <bdx> ok
[23:29] <cmars> bdx, also, let's remove trusty from the series in metadata
[23:29] <bdx> I thought i did ... checking
[23:30] <cmars> bdx, ah, you did, my bad
[23:32] <bdx> cmars: there ya go
[23:32] <lutostag> how would I fix: 2016-10-06 23:30:28 ERROR juju.worker.dependency engine.go:526 "leadership-tracker" manifold worker returned unexpected error: leadership failure: lease manager stopped... which daemon should I kick on the unit?
[23:33] <lutostag> 2.0 beta7 (can't tear down and redeploy for a little while unfortunately)
[23:36] <cmars> bdx, thanks, i'll have to test this out, but i could publish it soon. probably need to update the resource as well
[23:39] <bdx> cmars: totally, I was thinking of adding a tls-certificates interface, so if a user desired to have ssl, they could just relate to the easyrsa charms
[23:39] <cmars> bdx, oooh nice!
[23:40] <bdx> actually, I feel that functionality should be a part of the nginx charm though
[23:40] <cmars> bdx, do we have a LE layer yet?
[23:40] <cmars> that'd be really nice for a private secure mattermost
[23:40] <bdx> stokachu: ^^
[23:40] <bdx> cmars, stokachu: https://jujucharms.com/u/containers/easyrsa/2
[23:40] <bdx> LE layer?
[23:41] <cmars> let's encrypt
[23:41] <bdx> oooo
[23:42] <bdx> cmars: there should be
[23:42] <bdx> I know we have discussed it
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
[23:57] <stokachu> lutostag: switch to the admin controller and ssh into machine 0
[23:57] <stokachu> lutostag: then just pkill jujud and it'll restart and pick back up
[23:57] <stokachu> bdx: sorry some extra context?
[23:57] <stokachu> ah i see nginx
