[00:00] <Micromus> if each one takes days or weeks just to test...
[00:00] <LinStatSDR> Micromus: Have you thought about using VMWare Horizon 6?
[00:00] <LinStatSDR> I'm not sure what your current production system does, but
[00:00] <Micromus> No, I have looked at pricing of vmware vsphere and it's extremely expensive
[00:00] <Micromus> currently we have the cheapo vmware essentials for 3 nodes
[00:01] <Micromus> which is basically free at $200 a year or something
[00:01] <LinStatSDR> Do you deploy or do any Virtual Desktops?
[00:01] <Micromus> No, I work for an ISP
[00:01] <LinStatSDR> Oh okay, scratch that then.
[00:02] <Micromus> So we don't need that much performance, but we need the safety of clustering and ease of deploying new stuff, and saved time by automation
[00:03] <LinStatSDR> Hmm, if you don't mind me asking, what are you clustering and/or deploying / automating?
[00:03] <Micromus> So definitely, the cloud is the future, but right now it's hard to pick which vendor/implementation to go with
[00:04] <LinStatSDR> Just SDN stuff? Like deploying new networks using Openstack?
[00:04] <sarnold> Micromus: one neat thing is that juju can deploy multiple 'levels' of your system -- you could use juju to deploy openstack on your hardware and then manage all the vms in openstack by hand, or you could deploy openstack by hand and then use juju on top, or use juju for both the openstack deployment -and- the services that run on top of openstack
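A sketch of the layered approach sarnold describes (the charm and environment names here are illustrative, not from the conversation; exact commands depend on your provider and Juju version):

```shell
# Layer 1: use Juju against your own hardware to stand up OpenStack itself
# (charm names illustrative)
juju bootstrap
juju deploy keystone
juju deploy nova-compute

# Layer 2: point a second Juju environment at that OpenStack cloud and
# deploy the services that run on top of it
juju switch my-openstack-env
juju deploy mysql
juju deploy wordpress
juju add-relation wordpress mysql
```

Either layer can also be managed by hand instead, which is the modularity Micromus likes below.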
[00:04] <LinStatSDR> i.e. OpenStack Neutron.
[00:04] <Micromus> A range of services: caching DNS for customers, authoritative DNS for domains, webservers, customer e-mail platform, voip platforms, broadband provisioning systems, network monitoring systems, and systems monitoring the other systems
[00:05] <Micromus> sarnold: I like that
[00:05] <Micromus> I dislike the all-or-nothing approach
[00:06] <Micromus> Risk is lower when stuff is modularized
[00:06] <sarnold> Micromus: yeah :)
[00:06] <sarnold> Micromus: .. and it's also neat that if you like the tool, it can work for you on multiple kinds of tasks, too
[00:06] <LinStatSDR> Well, technically it's not all or nothing. The features are there, but if you're looking for a minimalistic approach you could always go knee-deep into Linux From Scratch
[00:06] <Micromus> And usually, when vendors bundle stuff together like that, it's because there is some stuff you cannot live without, but some of the stuff is utter shit
[00:07] <sarnold> Micromus: oh yes, so true..
[00:07] <LinStatSDR> Micromus: Yep, nothing like having to have bloatware on production systems.
[00:07] <LinStatSDR> Always a good time...
[00:08] <Micromus> So I'm always wary of "all-or-nothing" stuff, not because it's always bad, it's just likely to be bad based on prior experience and statistics :P
[00:08] <LinStatSDR> That's why we have Dev systems
[00:10] <Micromus> Yes, and ideally the time to get the dev system up and running isn't so costly that you're basically forced to push on with the project even if the pilot goes south, which sadly is very often the case
[00:10] <LinStatSDR> I always go for the minimalistic approach but sometimes I have issues with having such an environment when/if I need to scale out the services.
[00:11] <LinStatSDR> Micromus: The only thing that's costly is them paying you for all the time to troubleshoot, install, deploy and manage it.
[00:11] <Micromus> I spent a week writing a tutorial on setting up opentsdb when going from dev to production, https://peritusconsulting.github.io/articles/2014-06-02-next-generation-monitoring-using-opentsdb.html
[00:12] <Micromus> They also pay me to do the research/testing of the systems before install and deploy ;)
[00:13] <Micromus> So this is actually my step 2 of that, after skimming the webpages ;)
[00:13] <LinStatSDR> I'm in the same boat, I ONLY do R&D now but I have others do the documenting for me.
[00:13] <Micromus> So far: website awesome, community seems good (or I might just be lucky now), next step is downloading and installing later this week
[00:13] <LinStatSDR> I have a wiki, because no one ever reads the documentation
[00:14] <Micromus> Yep, that tutorial is now broken, due to new incompatible versions of stuff, so not worth the time spent from a pure documentation point of view
[00:14] <LinStatSDR> Micromus: It's really a niche community here.
[00:14] <sarnold> Micromus: you are a touch lucky now, irc isn't always this responsive; the juju team does a good job keeping up on questions in askubuntu though, so when viewed as an average-per-day it works out alright, but there are times when irc is .. thin.
[00:15] <Micromus> Internally we use wiki as well
[00:15] <LinStatSDR> sarnold: We get a lot of OpenStack / Juju questions in #Ubuntu and #Ubuntu-Offtopic
[00:16] <Micromus> sarnold: hehe, the opentsdb channel for example, usually goes weeks between any activity, so I'm guessing it's all relative
[00:16] <LinStatSDR> But please, no #ubuntu-offtopic questions. That's the place to relax
[00:16] <sarnold> LinStatSDR: oh, interesting, I hadn't heard of #ubuntu-offtopic, and #ubuntu is like drinking from a firehose, so I'm not often there :) hehe
[00:16] <sarnold> Micromus: nice, hehe
[00:16] <LinStatSDR> #ubuntu is a scary place with 1,800 users in it. The OT channel has like 220
[00:17] <Micromus> I usually don't go to chat or forums to ask questions, I can usually google my way to the source and solutions pretty quickly
[00:18] <Micromus> I may throw the occasional fit when I've spent hours on some problem that is just stupid though  :o
[00:18] <sarnold> Micromus: this seems like something you might enjoy reading about; I don't know how far along alan is yet.. http://linux-ha.org/source-doc/assimilation/html/index.html
[00:20] <Micromus> Hah, I know, discovered that a few days ago actually
[00:20] <Micromus> Chatting with Alan right now on #assimilation :)
[00:20] <sarnold> Micromus: hah, nice :)
[00:21] <LinStatSDR> Micromus: Just be glad you're not using Lync for the voip and video traffic
[00:21] <sarnold> it could just be my alanr fanboyism, he gives a hell of a good talk, no matter what he is talking about..
[00:21]  * LinStatSDR shudders
[00:21] <Micromus> Just wondering, why did you think I would enjoy reading about it?
[00:22] <Micromus> Because using graph databases for CMDB etc is something I have been thinking about for about a year
[00:22] <sarnold> Micromus: you sound curious and your opentsdb read like you enjoyed learning about it :) hehe
[00:22] <Micromus> spot on ;)
[00:23] <sarnold> \o/
[00:25] <Micromus> Anyway, it's 1.20 in the morning here and i should be asleep loong ago, thanks for the intro, I'll stick around and hopefully try some stuff later this week :D
[00:25] <LinStatSDR> Always good to be excited about new technology. I remember a deadzone for a few years
[00:25] <sarnold> gnight Micromus, see ya later :)
[00:25] <LinStatSDR> Night Micromus, nice talking to you
[00:27] <Micromus> :)
[00:32] <LinStatSDR> sarnold: You wouldn't happen to know if this channel is logged would you?
[00:48] <sarnold> LinStatSDR: yeah: http://irclogs.ubuntu.com/2014/12/09/%23juju.html
[00:48] <LinStatSDR> That's literally dated tomorrow
[00:48] <LinStatSDR> I see. I'm a fool. Don't mind me
[00:49] <sarnold> hah, so it is :)
[00:49] <sarnold> I was just surprised that it was so far behind..
[00:49] <LinStatSDR> It starts at 00:00 so
[00:49] <LinStatSDR> It's as intended
[00:55] <LinStatSDR> Thank you sarnold. I appreciate it.
[00:56] <sarnold> LinStatSDR: sure thing :)
[02:36] <jose> stub: ping
[02:36] <stub> jose: pong
[02:37] <jose> stub: in which moments would you expect an IP change?
[02:37] <stub> jose: I don't know. I had a query here recently from someone whose postgresql unit had an IP address change, and we needed to manually trigger some hooks to get things back online.
[02:38] <jose> stub: if you want to do such checks, you could add a sentinel file to your charm
[02:38] <jose> so do 'unit-get public-address' or private-address and echo/cat it into a file, then to check if it's changed compare the output of the same command with the info on the file
[02:38] <LinStatSDR> stub: lease expire or user command?
[02:39] <stub> LinStatSDR: I have no idea. I'm usually only working in the local provider, so it doesn't happen here :)
[02:39] <LinStatSDR> Ah.
[02:40] <LinStatSDR> stub: last time I had that happen to me someone was trying to change it manually and I had to trigger a hook to get it to work
[02:40] <stub> jose: last I heard, the whole system would fall over since the controller node would lose track of the units. But if that has been fixed, and the controller node follows the ip address changes of the units, then my charm needs to handle that too.
[02:41] <jose> stub: unfortunately the bootstrap node isn't aware and has no way to check whether the IP changed
[02:41] <LinStatSDR> I don't mind hooking but don't want a db full of misc. ips
[02:41] <stub> jose: That answers my question then :)
[02:41] <jose> stub: IF you want some info on how to get that fixed, lemme grab a link for you
[02:41] <LinStatSDR> MaaS can make the node aware.
[02:42] <LinStatSDR> iirc
[02:42] <jose> LinStatSDR: yeah, but bear in mind we're also dealing with aws, hpcloud, local, manual and many many more
[02:42] <jose> stub: http://blog.dasroot.net/reconnecting-juju-connectivity/
[02:43] <LinStatSDR> jose: Yeah, that's true.
[02:45] <stub> I'm sure it will happen one day. I think all providers provide some sort of unchanging key for each container.
[02:46] <stub> (cloud providers, not juju providers)
[02:46] <LinStatSDR> I was going to say stub lol
[03:01] <jose> stub: seen that last email? \o/
[03:02] <stub> yeah, and responded ;)
[03:03] <jose> \o/
[03:03] <jose> looks like my watch was faster than my PC fetching it
[15:03] <jrwren> I tried to follow http://blog.juju.solutions/cloud/juju/2014/11/26/debug-hooks-ext-plugin.html and nowhere does it say to add ~/.juju-plugins to PATH. To make it work, I had to add it to PATH. Is this intentional? Can the post be updated, or at least say "read the README for more"?
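For reference, the fix jrwren describes is a one-line PATH addition (the `~/.juju-plugins` directory name comes from the linked post; the shell syntax is standard):

```shell
# Make plugin scripts in ~/.juju-plugins discoverable as juju subcommands
export PATH="$PATH:$HOME/.juju-plugins"
```

Putting this in `~/.bashrc` or equivalent makes it persistent.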
[15:28] <marcoceppi> jrwren: you must follow the instructions of the plugins
[15:28] <cory_fu> jrwren: You're right that I should have mentioned that in the blog post.  There's also a requirement on python-jujuclient that I didn't mention.
[15:28] <marcoceppi> jrwren: https://github.com/juju/plugins#fetch-source
[15:28] <marcoceppi> cory_fu: you may just want to link to the repository readme, I'm about to write one of those lame curl | bash scripts for the plugins archive
[15:29] <marcoceppi> to do an interactive install or whatever
[15:30] <cory_fu> Ok.  Should I bother submitting a PR to add the python-jujuclient requirement to the README, then?
[15:30] <marcoceppi> cory_fu: go ahead
[15:30] <cory_fu> Ok
[15:30] <marcoceppi> no idea when this pipe dream of mine will come true
[15:30] <marcoceppi> heh, get it, PIPE dream
[15:30] <marcoceppi> man, I crack myself up
[15:31] <cory_fu> ...
[15:31] <jrwren> *groan*
[15:31]  * marcoceppi sees himself out
[15:46] <cory_fu> marcoceppi: PR for plugins and blog are submitted
[15:46] <mbruzek> cory_fu: marcoceppi I can take a look
[15:46] <cory_fu> mbruzek: Thanks
[17:06] <lazyPower> :( nobody ack'd my big data post
[17:06] <lazyPower> https://github.com/juju-solutions/juju-solutions.github.io/pull/16
[18:26] <marcoceppi> lazyPower: /me nacks
[18:26] <marcoceppi> ;) <3
[18:26] <lazyPower> huzzahhh
[18:27] <marcoceppi> not sure I agree with the voice of the article
[18:27] <marcoceppi> but that can be tuned later
[20:06]  * hatch waves at stimms
[20:07] <stimms> howdy
[20:08] <stimms> so we're looking at deploying a bunch of ubuntu machines to "the field". The field being a bunch of drilling rigs. They're going to be disconnected for the most part or connected over a really slow line.
[20:09] <hatch> stimms: hey I pm'd you :)
[20:09] <stimms> we would defer updates to when the devices are connected over a good connection
[20:09] <stimms> probably once every 4-6 weeks
[20:10] <hatch> we can take this to pm if you prefer
[20:11] <stimms> no, no nothing secret or proprietary here
[20:11] <hatch> ok sounds good :)
[20:11] <hatch> then plz continue :)
[20:12] <stimms> yeah so I was wondering how good of a fit this scenario would be
[20:12] <stimms> for the most part all the devices will be the same but they would be in different states of update
[20:13] <hatch> well typically Juju is used to orchestrate services in an environment - deploy, scale-up/down etc
[20:13] <hatch> so juju would be really nice for actually deploying the applications to the various machines
[20:14] <hatch> and for updating them when they come online
[20:14] <stimms> that was my impression
[20:14] <hatch> now the question is, are we talking about multiple physical machines?
[20:14] <hatch> or just a single one that will be virtualized
[20:14] <stimms> Yes, they're little micro ATX thingies
[20:15] <hatch> ahh ok so you also probably want to take a look at MAAS
[20:15] <hatch> as well
[20:15] <stimms> that being said we could deploy a core os to the physical device and then layer a VM on top
[20:16] <hatch> hmm
[20:16] <stimms> or deploy into a container
[20:16] <sarnold> juju is neat stuff, but if these machines are so .. disconnected .. I don't really see the need for it
[20:16] <hatch> stimms: so I suppose there are a few different ways to do this
[20:16] <stimms> being able to manage the machines and see what versions they are on from a central location would be handy
[20:17] <hatch> it's actually a pretty interesting problem
[20:17] <stimms> we live in an age of polyglot deployment
[20:17] <hatch> indeed
[20:17] <sarnold> it is interesting, nearly everything assumes ubiquitous networking :)
[20:18] <hatch> stimms: I'm trying to think of a good approach for the intermittent connectivity
[20:18] <hatch> I'm not sure if machines can pop in/out of an environment
[20:18] <hatch> assuming having a single env for all the rigs
[20:18] <stimms> how do you deal with network partitions?
[20:18] <hatch> you could of course have a new env for each rig
[20:18] <hatch> which would probably be best
[20:19] <sarnold> from hearing about this so far, I think something like canonical's landscape might make more sense, but I know part of the landscape client sends ping messages back to the control server on a fairly regular basis; you probably wouldn't want that much overhead on a modem or worse, satellite link
[20:19] <hatch> stimms, so each rig would have its own juju environment which, when connected, you'd be able to log into and view the environment/machines/services etc
[20:21] <hatch> stimms: I'm just thinking here...
[20:21] <hatch> :)
[20:22] <hatch> I love juju for the absolute trivial deployment story for services
[20:23] <hatch> so if you could script the deployment of the applications that need to be installed on each machine you could easily deploy them to each rig's environment with a couple commands
[20:24] <hatch> then when it came time to update any of those services it would also be as trivial as running a single juju command per environment
[20:24] <hatch> stimms: so I suppose the question is - would you want to script the deployment of the services you need to install?
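The per-rig workflow hatch sketches could look like this (the environment name `rig-07` and the charm names are hypothetical; `juju switch`, `juju deploy` and `juju upgrade-charm` were real subcommands in Juju 1.x):

```shell
# One environment per rig; deploy while the rig is connected
juju switch rig-07
juju deploy local:trusty/telemetry-collector   # charm name hypothetical
juju deploy local:trusty/data-logger

# Weeks later, during the next good connection window, update in place
juju switch rig-07
juju upgrade-charm telemetry-collector
```

The same script loops over every rig environment, which is the "couple commands" story hatch mentions.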
[20:25] <stimms> oh for sure I would want it scripted
[20:25] <stimms> I'm not manually deploying things to 100 boxes
[20:25] <hatch> lol no, you probably wouldn't want to do that
[20:26] <stimms> this isn't windowsland
[20:26] <hatch> so yes I'd probably use a MAAS + Juju setup
[20:26] <hatch> I think you need a minimum of 2 machines to run MAAS
[20:26] <stimms> lanscape looks like it would also be a good tool for us
[20:27] <hatch> MAAS would handle getting the hardware set up, os installed, registered for Juju's use
[20:27] <hatch> and yes landscape would be very helpful as well
[20:27] <stimms> and MAAS is (wasn't it the thing that made you walk faster in Mech Warrior 2?)
[20:27] <hatch> lol
[20:28] <hatch> https://maas.ubuntu.com/
[20:30] <stimms> Metal as a Service is a great name
[20:30] <hatch> haha I know right?
[20:31] <stimms> although it is a bit close to terminator for me to be completely comfortable with it
[20:31] <stimms> well this gives me some avenues to explore, thanks!
[20:32] <hatch> stimms: yeah there are so many different approaches to this
[20:32] <hatch> it will be a fun project for sure
[20:34] <stimms> I'll be delighted if I get to work on it for more than an hour a week
[20:36] <hatch> darn - well might be best to start looking at something like my Ghost blog charm
[20:36] <hatch> it's written in JS but is very basic
[20:36] <hatch> so it'll give you an idea of how a charm is structured
[20:36] <hatch> https://github.com/hatched/ghost-charm
[20:36] <stimms> every time I talk to you I hear about that charm
[20:36] <hatch> lol
[20:36] <lazyPower> stimms: he's the author, don't let it jade you :)
[20:37] <hatch> haha shush
[20:37] <lazyPower> stimms: alternatively - if you have charm-tools installed there are several templates available for boilerplate based exploration
[20:37] <hatch> it's just very basic and doesn't rely on any external libs or anything
[20:37] <hatch> so easy for new ppl to understand
[20:37] <hatch> and yes that too
[20:37] <lazyPower> to date there are 2 flavors of python template, a chef template, and a bash template - with more on the way and a possible restructuring of how we do charming once charm-helpers is a solified project
[20:38] <lazyPower> *solidified - it moves pretty quickly at present, we're adding and extending it all the time.
[20:38] <hatch> lazyPower: deploys environments all day so he would also be a great resource :)
[20:38] <lazyPower> hatch: your work to have a js based charm would make an excellent charm template btw
[20:38] <lazyPower> if you want help doing that lmk
[20:38] <hatch> lazyPower: until node has a synchronous exec() command js charms are funky to write
[20:39] <lazyPower> and that's newcomer friendly? :P
[20:39]  * lazyPower ducks
[20:39] <hatch> hahaha - well for js dev's it is
[20:40] <hatch> lazyPower: do you know of a really basic python charm?
[20:41] <lazyPower> hatch: the one created by charm create -t python_basic
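If charm-tools is installed, the templates lazyPower refers to are generated like so (`my-charm` is a placeholder name; the `python_basic` template name is from the line above):

```shell
# Generate python_basic boilerplate into ./my-charm for exploration
charm create -t python_basic my-charm
```

Swapping the `-t` argument selects the bash, chef, or other python flavors mentioned earlier.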
[20:41] <hatch> lazyPower: we should commit that to a repo somewhere for people to explore
[20:42] <hatch> I could probably do it
[20:42] <hatch> but then it'll be under my gh namespace :)
[20:42] <hatch> and we don't want that :P
[20:42] <lazyPower> hatch: actually we have a long running card to do many different avenues of going from zero to charm along the different devel paths
[20:42] <lazyPower> if you're a chef based user, bash, python, etc.
[20:42] <hatch> oh nice
[20:42] <lazyPower> then having that be part of the charm author docs
[20:42] <lazyPower> but time has been short with all the demos and other fringe work coming up, standing in between the docs and the eco team
[20:43] <lazyPower> some day, we'll perfect cloning and get more evilnic's
[20:44] <Micromus> Is there a separate channel for MAAS stuff?
[20:45] <Micromus> Would provisioning a glusterfs SAN with MAAS/juju be possible? And would it be a good idea?
[20:46] <sarnold> Micromus: there is a #maas
[20:46] <sarnold> Micromus: I didn't much care for the look of glusterfs when I read the code, but I haven't actually run it myself
[20:47] <Micromus> I've looked at ceph, but it's a pain to set up, and probably worse to operate
[20:56] <jcastro> we do have a glusterfs charm
[20:57] <lazyPower> it may be a bit long in the tooth - it's still targeting precise
[20:58] <hatch> stimms: if you have any more questions or anything feel free to ping me or ask here