[02:27] <codezomb> Is anyone aware of a way to autoscale charms? For example, running my workers needs to scale based on metric X. I could probably do something with SQS / Lambda to hit the juju api, but I thought I would ask if anyone knew of any existing solutions.
[02:45] <blahdeblah> ybaumy: I'm curious to know what part of the system is producing the error, since it implies that it's more than 5 minutes, not 2 seconds.  Might be best to log a bug about that.
[03:09] <jrwren> codezomb: https://jujucharms.com/charmscaler/
[03:24] <ybaumy> Sun Mar 19 03:24:04 UTC 2017
[03:24] <ybaumy> Sun Mar 19 04:24:04 CET 2017
[03:24] <ybaumy> Sun Mar 19 03:24:05 UTC 2017
[03:24] <ybaumy> Sun Mar 19 04:24:05 CET 2017
[03:24] <ybaumy> Sun Mar 19 03:24:06 UTC 2017
[03:24] <ybaumy> Sun Mar 19 04:24:06 CET 2017
[03:24] <ybaumy> Sun Mar 19 03:24:06 UTC 2017
[03:24] <ybaumy> Sun Mar 19 04:24:06 CET 2017
[03:24] <ybaumy> Sun Mar 19 03:24:07 UTC 2017
[03:24] <ybaumy> Sun Mar 19 04:24:07 CET 2017
[03:24] <ybaumy> blahdeblah:
[03:24] <ybaumy> maybe the CET/UTC difference?
[03:25] <ybaumy> but that shouldn't matter, right
[03:25] <ybaumy> CET is the timezone on the host where i am executing the command
[03:29] <ybaumy> nope thats not it
[03:29] <ybaumy> changed tzdata to UTC
[03:29] <ybaumy> same error
[03:33] <ybaumy> and there is no error when i print unixtime with +%s
[03:34] <ybaumy> hmm im out of ideas
[03:34] <blahdeblah> ybaumy: 7.927 hrs; doesn't sound like anything to do with TZs
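blahdeblah's point can be demonstrated directly: epoch seconds are timezone-independent, so `date +%s` is the right probe for real clock skew. A minimal sketch (illustrative only; `Europe/Berlin` stands in for CET):

```shell
# TZ only changes how an instant is rendered, not the underlying epoch
# count, so the difference below is (at most) one second of tick jitter --
# a timezone mismatch can never produce a multi-hour offset.
utc=$(TZ=UTC date +%s)
cet=$(TZ=Europe/Berlin date +%s)
d=$((utc - cet))
echo "epoch difference across timezones: ${d#-}s"
```

A 7.927 h discrepancy therefore has to come from a genuinely wrong clock somewhere, not from rendering.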
[03:34] <ybaumy> but what can it be
[03:34] <blahdeblah> ybaumy: Can you pastebin your juju status --format=yaml (obfuscated if necessary)?
[03:35] <ybaumy> http://pastebin.com/Qhr8JRea
[03:37] <blahdeblah> ybaumy: and you can't deploy any charms at all?
[03:37] <ybaumy> wait a sec. let me try and add-machine + unit
[03:38] <blahdeblah> ybaumy: But you don't appear to have any services to add units to
[03:38] <blahdeblah> s/services/applications/
[03:39] <blahdeblah> What does juju status --format=tabular show?
[03:39] <ybaumy> blahdeblah: that's the output of the controller model
[03:39] <ybaumy> blahdeblah: im getting the same error when switching to model default
[03:39] <ybaumy> blahdeblah: with add-machine --constraints tags=prodcloud,node
[03:39] <blahdeblah> ybaumy: So in which model were you seeing the "Expired timestamp..." error from above?
[03:40] <ybaumy> blahdeblah: default or it seems it doesnt matter
[03:40] <ybaumy> blahdeblah: its broken
[03:41] <ybaumy> ii  juju                               1:2.2~alpha1-0ubuntu1~16.04.1~juju1 all          next generation service orchestration system
[03:41] <ybaumy> ii  juju-2.0                           1:2.2~alpha1-0ubuntu1~16.04.1~juju1 amd64        Juju is devops distilled - client
[03:41] <blahdeblah> My guess is something to do with replication on the controllers, but I'm not expert enough in mongodb to advise on that.
[03:41] <ybaumy> maybe i will open a bug report
[03:51] <ybaumy> will downgrade later to stable on my testbox
[03:51] <ybaumy> if that is even possible
[03:51] <ybaumy> i opened up a bug
[04:01] <ybaumy> i hope i have something ready and working by tuesday. i have to give a presentation about the current status of the POC :D .. juju broken here .. a maas bug .. problems with openstack in general
[04:01] <ybaumy> that will be a mess
[04:07] <ybaumy> im gonna watch some seinfeld. i need my dose of george in order to cope with reality :D
[05:12] <ybaumy> hmm on my other environment i have the same error
[05:13] <ybaumy> 2.0.2-0ubuntu0.16.04.1
[05:13] <ybaumy> and thats the version
[05:13] <ybaumy> so it must be something else
[05:14] <ybaumy> the strange thing is
[05:14] <ybaumy> on my veeam backup server the jobs failed with an error that the time is out of sync between client and server
[05:18] <ybaumy> lol
[05:18] <ybaumy> i found it
[05:18] <ybaumy> on my maas server the time was set to 14:00 and it's actually 06:18
[05:18] <ybaumy> i installed ntp there and now everything's working
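The root cause fits the original error: timestamp-based auth typically tolerates only a few minutes of skew, and the MAAS host was hours off. A hedged sketch of a pre-flight skew check (the 300 s tolerance and the epoch values are illustrative assumptions, not taken from the log):

```shell
# Flag a host whose epoch clock differs from a reference by more than a
# tolerance. 300 s is a guess at the kind of limit behind an
# "Expired timestamp" rejection; adjust to the real service's window.
skew_ok() {
  ref=$1; host=$2; tol=${3:-300}
  d=$(( ref - host ))
  [ "${d#-}" -le "$tol" ]
}

skew_ok 1489894800 1489894805 && echo "in sync"
# these two are ~7.9 h apart, roughly the offset blahdeblah computed:
skew_ok 1489894800 1489866260 || echo "drifted"
```

Running such a check against every host (MAAS, controllers, hypervisors) before blaming the application would have surfaced this quickly; the later ESX/veeam failures were the same clock problem wearing a different error message.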
[05:24] <ybaumy> but veeam backup is still broken
[05:48] <ybaumy> one of my esx servers had no ntp config
[05:48] <ybaumy> damn
[06:43] <codezomb> jrwren: I couldn't get that to be stable, it also required a paid subscription to scale to anything beyond 4 nodes :/
[13:56] <ybaumy> damn