[00:59] <lazyPower> hnix: o/
[01:00] <lazyPower> there's some interesting behavior at play here with the rails charm. In its current incantation - rvm is being used as a "wrapped command" - meaning it only exists within the context of the charm hooks. This has some implications for the current documentation and usage patterns
[01:00] <lazyPower> hnix: what i suggest is attempting to do the following: "juju run rake db:migrate" instead of juju ssh - if that fails, there is a rewrite in progress of moving the rails charm from rvm to rbenv
[01:01] <lazyPower> and i dont have an ETA on when I will be finished with that, however - all contributions/testing are welcome :)
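A minimal sketch of the suggestion above, assuming a unit named rails/0 (the unit name is illustrative); "juju run" executes in the hook context, where the wrapped command exists:

    # runs inside the charm hook context, so the wrapped rake is on PATH
    juju run --unit rails/0 "rake db:migrate"

    # a plain shell via juju ssh does not load the wrappers, so this tends to fail
    juju ssh rails/0 "rake db:migrate"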
[01:04] <hnix> LazyPower : Thank you!
[01:05] <lazyPower> hnix: if you're interested in following along with that migration effort i have a github branch you can watch - https://github.com/chuckbutler/rails-charm/tree/rbenv_migration
[01:08] <hnix> I'm a new GoLang programmer - about 2 weeks in. I really want to contribute to a project to extend my programming skills... that's why I'm playing with this project ( #juju ). I want to learn the basics as soon as possible and then contribute actively to a project like this or docker
[01:09] <lazyPower> We welcome all the help we can get hnix. Are you looking to learn how juju works then contribute to juju-core?
[01:09] <hnix> lazypower : i will check this now and see ! i appreciate any advice
[01:11] <hnix> lazypower : Yeah i want to contribute to juju-core
[01:11] <jw4> hnix: welcome - have you checked out CONTRIBUTING.md in the juju-core repo?
[01:11] <thumper> hnix: o/
[01:11] <thumper> hnix: what sort of contributions are you interested in offering?
[01:11] <thumper> hnix: what floats your boat?
[01:16] <hnix> lazypower : yeah, I checked CONTRIBUTING.md and cloned the git repo... any task that I can do, I will do. For now I don't know what, but along my learning and juju discovery I will figure out what kind of contribution I can make
[01:16] <lazyPower> hnix: jw4 and thumper are both core contributors. They're helpful in finding bitesized bugs to fix :)
[01:17] <lazyPower> we have a fairly large landscape in terms of contributions - juju-core is great if you want to sharpen your Go, or if you're looking for general contributions we have a large collection (260+) of charms (services) that are always open to contributions of any kind, ranging from docs to actual orchestration logic
[01:18] <lazyPower> hnix: if your main focus is to contribute to core, reading the Contributing doc and joining #juju-dev would be my recommendation. Learn the lay of the land - find your first bitesized bug and dive in.
[01:18] <jw4> hnix: bit of a firehose, but this is the list of high priority bugs in juju-core https://bugs.launchpad.net/juju-core/+bugs?search=Search&field.importance=High&field.status=New&field.status=Incomplete&field.status=Confirmed&field.status=Triaged&field.status=In+Progress&field.status=Fix+Committed
[01:20] <hnix> Whoa! Take it easy, man... I'm a new Golang dev. You're scaring me +_+
[01:20] <jw4> hnix: once you're able to build and run all the tests you should be ready to start tackling bugs ;)
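A rough sketch of that build-and-test workflow at the time (workspace paths are assumptions):

    # fetch the source into your GOPATH and build it
    go get -d -v github.com/juju/juju/...
    cd "$GOPATH/src/github.com/juju/juju"
    go build ./...

    # the full suite takes a while; start with a single package if you prefer
    go test ./...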
[01:21] <jw4> hnix: lol - I think lazyPower may know of some charms that are written in Go (right lazyPower ?) those might be bite sized too
[01:21] <lazyPower> we dont have charms in go atm
[01:21] <lazyPower> but it would be an interesting take on charming
[01:21] <hnix> is this a fast track to dive into a new language?
[01:21] <jw4> lazyPower, hnix strike that then
[01:21] <sarnold> ISTR someone gave that a shot a few months back..
[01:21] <lazyPower> natefinch i think
[01:21] <lazyPower> or at least he was talking about it
[01:22] <lazyPower> my problem with a golang charm is how do you debug a binary when things go sideways?
[01:22] <jw4> hnix: there is some good go code in juju-core, but it is fairly sophisticated - depends on how big of a bite you want to take I guess
[01:22] <sarnold> lazyPower: yeah, I wouldn't want to maintain it myself :)
[01:22] <jw4> lazyPower: +1
[01:22] <lazyPower> jw4: would be good if we had some small things like localizations, or output corrections.
[01:22] <lazyPower> tag those as bitesized - we have a page that links to a bug tracker list that currently returns zero results
[01:23] <lazyPower> https://juju.ubuntu.com/resources/easy-tasks-for-new-developers/ <-- the juju-core list is empty
[01:23] <jw4> lazyPower: yeah - that's a great idea.   Hmm...
[01:23] <sarnold> this one feels like it's probably approachable by a new dev https://bugs.launchpad.net/juju-core/+bug/1420057
[01:23] <mup> Bug #1420057: agents see "too many open files" errors after many failed API attempts <juju-core:Triaged> <https://launchpad.net/bugs/1420057>
[01:24] <hnix> Lazypower : so trying out charms is easier to begin with than juju-core?
[01:24] <sarnold> strace agents to find out which operations open but don't close file descriptors...
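A sketch of that strace approach - the agent binary is jujud; the pgrep filter is an assumption:

    # watch descriptor-related syscalls on a running agent
    sudo strace -f -e trace=open,socket,pipe,dup,close -p "$(pgrep -of jujud)"

    # or just count open descriptors over time to confirm the leak
    sudo ls "/proc/$(pgrep -of jujud)/fd" | wc -l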
[01:24] <hnix> is that the case? (looking for advice)
[01:24] <lazyPower> hnix: charms aren't go-lang specific, but they are a great place to start contributing
[01:24] <lazyPower> sarnold: you love pain
[01:25] <sarnold> lazyPower: I figured it wouldn't take much 'internal' knowledge of juju to get started on it
[01:25] <sarnold> lazyPower: .. and perhaps the agents are easier than the orchestrator
[01:25] <lazyPower> thats a fair assumption
[01:25] <hnix> lazypower : hhhhhh we love fast bootstrap
[01:26] <lazyPower> https://bugs.launchpad.net/juju-core/+bug/1273216
[01:26] <mup> Bug #1273216: unknown --series to add-machine breaks provisioner <juju-core:Triaged> <https://launchpad.net/bugs/1273216>
[01:26] <lazyPower> that looks like it would be a good one to add to as well - you can define a list of series we support, and do a dict lookup
[01:26] <sarnold> oh, that one looks easier
[01:26] <sarnold> just on a blind guess, anyway :)
[01:30] <lazyPower> https://bugs.launchpad.net/juju-core/+bugs?field.tag=bitesized
[01:30] <lazyPower> there's some
[01:31] <lazyPower> https://bugs.launchpad.net/juju-core/+bug/1320218 - probably the easiest out of all that I found... yaml lint + error message
[01:31] <mup> Bug #1320218: can't read environment file no hint of actual yaml error <bitesize> <config> <ui> <juju-core:Triaged> <https://launchpad.net/bugs/1320218>
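The bug asks juju to surface the underlying YAML parse error; the kind of hint wanted can be reproduced by hand (a sketch, assuming PyYAML is installed and the default juju 1.x config path):

    # print the actual parse error instead of a vague "can't read environment file"
    python -c "import sys, yaml; yaml.safe_load(open(sys.argv[1]))" ~/.juju/environments.yaml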
[01:32] <hnix> lazypower : I think I followed https://github.com/chuckbutler/rails-charm/tree/rbenv_migration a week ago and the steps did not work for me. I think it was a bad start.
[01:32] <sarnold> maybe... I can imagine getting a few of those cases would be easy but 'solving' that in full generality might take some work
[01:33] <lazyPower> hnix: I'm not sure I parsed that - you tried the branch and it failed, yes? That's known - it's still WIP, as there's quite a bit to be done to fully triage a migration from rvm => rbenv
[01:33] <lazyPower> plus we're going from a localized ruby installation to an rbenv installation that scopes to users, so you're not installing system wide
[01:33] <lazyPower> the idea is to move to a 'bullet proof deployment' model where binstubs are used, and localized ruby copies can be embedded in the application itself
[01:34] <lazyPower> i see no reason to 'hide ruby' on the system which is what the current charm does, it creates rvm wrappers - which are nearly impossible to consume outside of the charm hook context.
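A sketch of the per-user rbenv model being described (the clone URL and the use of binstubs are assumptions about the eventual charm, not its current code):

    # install rbenv for the application user instead of hiding ruby behind rvm wrappers
    git clone https://github.com/sstephenson/rbenv ~/.rbenv
    export PATH="$HOME/.rbenv/bin:$PATH"
    eval "$(rbenv init -)"

    # binstubs keep the app's commands callable outside the charm hook context
    bundle install --binstubs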
[01:47] <hnix> the app I'm trying to deploy, https://github.com/ziralabs/E-learing-app, is a learning app created to get my master's degree in education... after that I'll launch it publicly for anyone who wants to know how to integrate a REPL built with NodeJS into a Rails app
[01:49] <stokachu> tvansteenburgh: do you know if python3 version of jujuclient is making its way into the archive soon?
[01:55] <lazyPower> stokachu: hazmat's latest release of jujuclient is py2 and py3 compliant
[01:55] <lazyPower> stokachu: https://github.com/kapilt/python-jujuclient/blob/master/jujuclient.py#L35
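A quick way to check the py2/py3 claim once the module is importable (a sketch; how you installed it is up to you):

    python  -c "import jujuclient; print('py2 ok')"
    python3 -c "import jujuclient; print('py3 ok')"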
[01:56] <lazyPower> oh you asked if it was getting into the archive - sorry jumped the gun there.
[06:21] <toyo|work> I can't seem to get juju to bootstrap on version 1.20.11
[06:22] <toyo|work> Juju cannot bootstrap because no tools are available for your environment.
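Common workarounds for that message at the time, as a sketch (which one applies depends on the environment and network access):

    # mirror released tools into the environment's storage first
    juju sync-tools

    # or build and upload tools from the local client while bootstrapping
    juju bootstrap --upload-tools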
[09:11] <gnuoy> jamespage, would you mind taking a look at https://code.launchpad.net/~gnuoy/charm-helpers/1422386/+merge/249837 if/when you have a moment ?
[09:32] <jamespage> gnuoy, to my unaware brain it looks OK
[09:34] <stub> gnuoy: Did you fix https://bugs.launchpad.net/charm-helpers/+bug/1423176 with your nrpe landing?
[09:34] <mup> Bug #1423176: test_nrpe tests broken on trunk <Charm Helpers:New> <https://launchpad.net/bugs/1423176>
[09:38] <gnuoy> stub, yes, it's fixed, and I'm hanging my head in shame for not having spotted the unit test break
[09:45] <Murali> hi jamespage I deployed using lxc like juju deploy --to lxc:0 openstack-dashboard
[09:45] <Murali> my juju status has been pending for a long time
[09:46] <Murali> am I missing something here
[09:46] <jamespage> Murali, the first container can take a bit of time on each physical host depending on how fast your internet connection is
[09:46] <jamespage> it has to pull down the right cloud image and unpack it for use locally
[09:47] <Murali> ok
[10:15] <Murali> Hi jamespage
[10:17] <Murali> even when I deploy keystone and dashboard on different nodes, the login prompts an error saying "authentication failed for admin. please try again".. been stuck on this for some time
[10:18] <Murali> Are there any logs which could give me some hints? No errors found in the juju debug logs
[10:40] <jamespage> Murali, so I'd check the keystone log files (/var/log/keystone) and anything in /var/log/apache (on the dashboard unit)
[10:41] <Murali> In apache, we get a log saying "login failed for user".. and in the keystone logs we couldn't find anything either
[10:42] <jamespage> Murali, OK - so let's bypass the dashboard and make sure that you can actually access keystone from the cli
[10:43] <jamespage> Murali, http://paste.ubuntu.com/10306078/
[10:43] <jamespage> Murali, please adapt that for your environment - specifically the password
[10:43] <Murali> ok
[10:45] <jamespage> Murali, you can then source that file and try a 'keystone catalog' from a command line please
[10:45] <jamespage> Murali, trying to see if you have a keystone problem or a dashboard problem
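A sketch of the kind of credentials file being suggested - every value below is a placeholder to adapt:

    # ~/novarc (illustrative)
    export OS_USERNAME=admin
    export OS_PASSWORD=changeme              # the admin password set on keystone
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://<keystone-unit-ip>:5000/v2.0

    # then exercise keystone directly, bypassing the dashboard
    source ~/novarc
    keystone catalog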
[10:46] <Murali> Ok
[10:48] <jamespage> Murali, fyi I have a few errands to run in about 15 mins which means you won't get responses from me for about 1.5-2hrs
[10:48] <jamespage> (not ignoring you :-))
[10:49] <Murali> thanks for your help.. we will try out the steps you mentioned...
[10:53] <jamespage> Murali, I'll ask the question but you have added a relation between keystone and the dashboard right?
[10:53] <jamespage> Murali, just checking ;-)
[10:58] <Murali> yes jamespage added the relation
[10:58] <jamespage> good
[11:11] <Murali> http://paste.ubuntu.com/10306455/ .. this works fine :P
[12:28] <valentyn> Hey, I know that it is possible to deploy to a running juju machine, but is it possible to start a "clean" instance (without any service) and deploy services to that machine afterwards? thanks
[12:45] <stub> tvansteenburgh: If you are able I'd love another run of lp:~stub/charms/trusty/cassandra/spike on the new Jenkins - I'm hoping all is good now with the 2GB VMs.
[12:45] <tvansteenburgh> stub: will od
[12:45] <tvansteenburgh> do
[12:46] <stub> yeah, please don't OD on me.
[12:46] <tvansteenburgh> hahah
[12:48] <tvansteenburgh> stub: running now
[12:48] <stub> ta muchly
[12:48] <tvansteenburgh> you bet
[13:57] <lazyPower> valentyn: juju add-machine will allocate it in the unit pool, and you can juju deploy --to it afterwards.
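A sketch of that flow (the service and machine number are illustrative):

    juju add-machine          # starts a clean instance with only the machine agent
    juju status               # note the new machine's number, e.g. 1
    juju deploy mysql --to 1  # place a service on it afterwards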
[15:00] <Crypticus> hello all
[15:00] <Crypticus> I have an issue with Juju I was hoping someone might have some insight
[15:01] <Crypticus> quantum-gateway has been in "agent-state: installed" for over 12 hours
[15:01] <Crypticus> I checked the logs and there were no errors.
[15:01] <Crypticus> Is this normal? for this charm? I didn't want to add the relationships until I was sure this was not broken
[15:11] <Murali> hi jamespage
[15:11] <jamespage> Murali, o/
[15:22] <Murali> http://paste.ubuntu.com/10309259/
[15:22] <Murali> we got this error while following the https://jujucharms.com/openstack/
[15:23] <Murali> the installation is still going on
[15:37] <Murali> jamespage all the lxc containers status is in pending
[15:37] <Murali> but the internet is at a good speed
[15:37] <jamespage> Murali, does your hardware match the requirements in the README for that bundle?
[15:37] <Murali> is there anything we are missing
[15:38] <jamespage> Murali, I'd drop onto one of the machines and check in /var/log/juju/*
[15:38] <Murali> yes, the hardware matches - 8 GB RAM
[15:45] <jamespage> Murali, and the network and disk requirements?
[15:45] <jamespage> Murali, the error you detail above would indicate that maybe your servers don't have two network ports?
[15:45] <Murali> we have 2 disks
[15:45] <jamespage> Murali, good - that enables the ceph bits
[15:45] <Murali> we have 2 nics but name is em1
[15:45] <jamespage> Murali, ah right
[15:45] <Murali> not eth0 and eth1
[15:46] <jamespage> Murali, this is tricky - but not hard to fix - I'd just grab the bundle itself and change it so that the reference to 'eth1' is switched to em1
[15:46] <jamespage> Murali, you can use juju-deployer to deploy the bundle
[15:46] <jamespage> Murali, or you can do a juju set neutron-gateway ext-port=em1 now
[15:46] <jamespage> and then resolve the error
[15:46] <jamespage> juju resolved --retry neutron-gateway/0
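A compact recap of the two options just described (the bundle filename is an assumption):

    # option 1: fix the deployed service in place, then retry the failed hook
    juju set neutron-gateway ext-port=em1
    juju resolved --retry neutron-gateway/0

    # option 2: edit the bundle (s/eth1/em1/) and deploy it with juju-deployer
    juju-deployer -c openstack.yaml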
[15:46] <Murali> ohh that problem is fine
[15:47] <Murali> But we are seeing the lxc container in a pending state - let me send you the details
[15:48] <jamespage> Murali, ok
[15:48] <jamespage> Murali, checkout the logs in /var/log/juju
[15:48] <Murali> http://paste.ubuntu.com/10309730
[15:48] <jamespage> and checkout "sudo lxc-ls -f" on any of the physical hosts
[15:48] <Murali> ok
[15:50] <Murali> http://paste.ubuntu.com/10309745
[15:56] <Murali> now we are not seeing any activity in the debug logs either
[16:02] <Murali> jamespage it's late night for us. Leaving now, I will be off the network
[16:03] <Murali> thanks a lot for help
[16:03] <jamespage> Murali, hmm - they are not getting IP addresses - check out the bridge configuration on the boxes - specifically juju-br0
[16:03] <Murali> ohh ok
[16:04] <Murali> so who will allocate IP addresses for the lxc containers?
[16:06] <jamespage> Murali, well actually MAAS and the lxc containers just boot off standard DHCP on the network
[16:06] <jamespage> Murali, are you using MAAS DHCP?
[16:07] <Murali> yes we are using MAAS-DHCP
[16:09] <jamespage> Murali, are you ok for a few minutes? I think we nearly have the root cause
[16:09] <jamespage> Murali, can you check the bridge configuration - using brctl show
[16:10] <jamespage> juju-br0 should have all of the lxc container ports listed and the right external port - which for your system is em0 I think
[16:10] <Murali> sure jamespage
[16:10] <jamespage> Murali, don't want to keep you from required beer after a long day of hacking :-)
[16:11] <jamespage> beer/<insert your drink of choice here>
[16:11] <Murali> :P
[16:11] <jamespage> hmm beer
[16:11] <Murali> http://paste.ubuntu.com/10310014
[16:12] <jamespage> only 4pm my time
[16:12] <jamespage> Murali, ok this is the problem
[16:12] <jamespage> Murali, can you do me an ip addr on that box as well
[16:12] <jamespage> dimitern, ^^ eek
[16:12] <jamespage> Murali, can you also confirm which juju and maas versions you are using
[16:13] <jamespage> Murali, juju-br0 is normally wired to eth0
[16:13] <jamespage> Murali, but it looks like juju is not dealing well with emX naming
[16:13] <Murali> ok
[16:14] <jamespage> Murali, that's a healthy one - http://paste.ubuntu.com/10310039/
[16:14] <Murali> maas version is 1.5.4
[16:16] <Murali> is there any way to come out of this issue
[16:16] <Murali> or resolve it
[16:17] <jamespage> Murali, probably
[16:17] <jamespage> I'm just pondering it now
[16:17] <jamespage> Murali, also hunting down the juju dev I know who does networking bits like this
[16:18] <Murali> thanks jamespage
[16:18] <Murali> our juju version is 1.21.1
[16:24] <jamespage> Murali, can you also pastebin ip addr and /etc/network/interfaces please
[16:26] <dimitern> Murali, jamespage, hey guys
[16:27] <jamespage> dimitern, Murali is right at the end of day
[16:27] <dimitern> Murali, can you also paste the contents of the maas lshw XML dump of the machine in question (the one done during commissioning by maas)?
[16:28] <jamespage> thanks for helping out dimitern
[16:28] <dimitern> jamespage, np
[16:29] <dimitern> IIRC there was a bug filed about having emX instead of ethX NICs
[16:29] <Murali> http://paste.ubuntu.com/10310220/
[16:30] <Murali> maas lshw and ifconfig are as per the above paste
[16:30] <Murali> maas is showing eth1 even though it is em1 in ifconfig
[16:31] <dimitern> Murali, I knew something like that might be the issue
[16:31] <dimitern> Murali, how come that /etc/network/interfaces config is using other names?
[16:32] <dimitern> Murali, is it something done during the node deployment (by maas) or something done later (after juju started on it - maybe a charm?)
[16:32] <jamespage> dimitern, this is a bundle deploy of openstack from the charmstore
[16:33] <jamespage> hey scuttlemonkey
[16:33] <Murali> http://paste.ubuntu.com/10310282
[16:34] <dimitern> Murali, yeah, can you paste the contents of /etc/udev/rules.d/* ?
[16:34] <dimitern> Murali, esp. the persistent net rules
[16:35] <scuttlemonkey> jamespage: hola senor
[16:35] <dimitern> Murali, /var/log/cloud-init*.log will be useful to check as well
[16:35] <jamespage> scuttlemonkey, hows that calamari charm coming along? ;-)
[16:36] <scuttlemonkey> jamespage: hehe, no idea :)
[16:36] <scuttlemonkey> I'm drowning in a sea of bureaucracy...my stack doesn't seem to be getting shorter
[16:37] <jamespage> scuttlemonkey, no worries :-)
[16:39] <jamespage> beisner, not sure that we're testing on tyans - those are the dev boards
[16:39] <Murali> http://paste.ubuntu.com/10310379
[16:40] <beisner> jamespage, ah good point.
[16:41] <Murali> in "etc/udev/rules/"  we have only README file exists
[16:43] <jamespage> dimitern, I think this is a situation that can occur depending on the type of network cards you are using
[16:49] <Murali> is there any way to get rid of this issue jamespage/dimitern
[16:50] <dimitern> Murali, jamespage, well, if it's the NIC drivers that are causing this, you might want to try disabling them and using generic ones
[16:50] <jamespage> Murali, to unblock you I'd suggest rewriting the network configuration manually on the physical servers switching eth0 -> em1 and then restart the networking
[16:50] <jamespage> dimitern, I think this is stock kernel behaviour ;-)
[16:50] <dimitern> jamespage, oh boy :/
[16:50] <jamespage> Murali, you may need to nudge the lxc containers manually
[16:50] <jamespage> lxc-stop --name xxx && lxc-start --name xxx
[16:50] <jamespage> afterwards
[16:50] <dimitern> Murali, yes, s/eth1/em1/ in /etc/network/interfaces + reboot should do the trick
[16:51] <Murali> ok
[16:52] <dimitern> Murali, first try fixing the host machines, then if the containers haven't come up, try juju retry-provisioning 0/lxc/0 (one by one)
[16:53] <dimitern> Murali, alternatively :) you could just change the lshw XML dump for these nodes in maas.. or perhaps use third-party drivers for your hardware, so maas will properly discover the NICs during commissioning
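Putting the workaround together as a sketch (interface names per the discussion; the container id is illustrative):

    # on each affected physical host, point the config at the real NIC name
    sudo sed -i 's/\beth1\b/em1/g' /etc/network/interfaces
    sudo reboot

    # once the host is back, nudge any containers that stayed pending, one by one
    juju retry-provisioning 0/lxc/0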
[16:56] <Murali> Where is this lshw XML file.. what's the path??
[16:59] <jamespage> Murali, can you check "apt-cache policy biosdevname" on those machines please?
[17:00] <jamespage> Murali, oh wait - I bet that is installed
[17:00] <jamespage> Murali, dimitern: can you confirm whether your machines were installed with the fast installer? it's an option that I think you have to enable in 1.5.x of MAAS
[17:00] <Murali> http://paste.ubuntu.com/10310654
[17:01] <jamespage> Murali, dimitern: I suspect a standard d-i based install installs biosdevname which does this remapping magic
[17:01] <dimitern> jamespage, all 4 of my local maas kvms are using the fast installer
[17:01] <jamespage> Murali, the option is in the MAAS UI - you can turn fast installer on/off for any machine
[17:01] <dimitern> jamespage, good catch! this sounds like the likely cause
[17:02] <jamespage> dimitern, well if it is the cause then yes
[17:02] <jamespage> dimitern, I checked on our MAAS based openstack installs == no biosdevname
[17:02] <Murali> we were not using the fast installer till now
[17:02] <jamespage> my local dhcp/dns server installed from usb stick has em1
[17:03] <jamespage> Murali, switching to the fast installer should fix this problem - but we need a bug for this
[17:03] <jamespage> I'll raise one now
[17:03] <dimitern> jamespage, awesome! I'll do a quick test with one of my machines to see if switching off curtin will reproduce it
[17:05] <Murali> jamespage are you suggesting using the fast installer or the default one?
[17:06] <jamespage> Murali, use the fast installer for all of them
[17:06] <jamespage> all machines that is
[17:06] <jamespage> Murali, it's an image-based install - no biosdevname, so it should match the data reported in MAAS, which juju uses.
[17:08] <jamespage> dimitern, https://bugs.launchpad.net/juju-core/+bug/1423626
[17:08] <mup> Bug #1423626: Inconsistent device naming depending on install method <juju-core:New> <MAAS:New> <maas (Ubuntu):New> <https://launchpad.net/bugs/1423626>
[17:09] <dimitern> jamespage, ta
[17:09] <jamespage> dimitern, I pinged the maas guys as well
[17:10] <dimitern> jamespage, cheers - I'm updating bug 1423372 with this info
[17:10] <mup> Bug #1423372: juju-br0 is not configured correctly on machines without ethX interfaces <jujud> <network> <juju-core:New> <https://launchpad.net/bugs/1423372>
[17:15] <Murali> thanks a lot for great help jamespage/dimitern
[17:15] <Murali> we are now re-bootstrapping using the fast installer option
[17:16] <jamespage> Murali, hopefully that will sort you out
[17:16] <Murali> ok
[17:16] <dimitern> Murali, np, glad we could help!
[17:16] <jamespage> dimitern, how about using the mac address to id the interface - then you don't need to worry about renaming
[17:17] <jamespage> dimitern, fyi this will get worse under systemd
[17:17] <jamespage> which I think does all this anyway
[17:19] <dimitern> jamespage, I've responded in #maas as well - I don't know how to discover the NIC name (which I need to generate the /etc/network/interfaces file at the time the userdata for cloud-init is generated) from its MAC address
[17:22] <jamespage> dimitern, ah yes tricky
[17:23] <jamespage> dimitern, I guess you'd have to inject a script that runs and interrogates the interfaces and then writes the config as part of cloud-init
[17:24] <dimitern> jamespage, good idea - a bootcmd that does that can work not only for maas
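A sketch of such a bootcmd - the MAC and the eth1 placeholder are assumptions; the idea is to resolve whatever name the kernel gave a known MAC and rewrite the config before networking comes up:

    # find the interface name the kernel assigned to a known MAC address
    mac="aa:bb:cc:dd:ee:ff"
    iface=$(ip -o link | awk -v m="$mac" 'tolower($0) ~ m {sub(/:$/, "", $2); print $2; exit}')

    # replace the placeholder name that was emitted into the config
    sed -i "s/\beth1\b/${iface}/g" /etc/network/interfaces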
[17:40] <lazyPower> o/ tvansteenburgh
[17:40] <lazyPower> can I bother you for a moment regarding what you did yesterday to kick off that passing test? I tried to reproduce it and got a failure in CI, so I didn't apply the magic Tim seasoning
[18:01] <tvansteenburgh> lp|charm-school: looking now
[18:02] <tvansteenburgh> lp|charm-school: actually i'm not sure which build it it, send me a link when you have a sec
[18:03] <tvansteenburgh> s/it it/it is/
[18:03] <jw4> tvansteenburgh: nice palindrome
[18:04] <tvansteenburgh> omg
[18:04] <jw4> hehe
[18:10] <lazyPower> nice!
[18:10] <lazyPower> tvansteenburgh: sure, did you need the passing build or the repository?
[18:10] <tvansteenburgh> a link to the build that failed
[18:11] <lazyPower> oh :( that i dont have, let me kick it off again and i'll ship it over. sorry - i didnt think to save it
[18:11] <tvansteenburgh> surely we can find it
[18:11] <lazyPower> i think it was build 87
[18:11] <tvansteenburgh> which job name?
[18:11] <tvansteenburgh> the new or old one
[18:11] <lazyPower> nope, thats the pass
[18:11] <lazyPower> 1 sec
[18:12] <lazyPower> http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/86/console
[18:13] <tvansteenburgh> lazyPower: ok so that was just before the one that passed
[18:13] <lazyPower> yeah i cannot find the azure job i kicked off that failed, i just realized that was an aws substrate
[18:13] <lazyPower> it vanished...
[18:13] <lazyPower> 88 is a cassandra test
[18:14] <lazyPower> let me kick another one off and get fresh results, sorry about this :( I should have had that in hand
[18:14] <tvansteenburgh> this one? http://juju-ci.vapour.ws:8080/job/charm-bundle-test-azure/41/console
[18:15] <tvansteenburgh> anyway, feel free to run another one if you want
[18:24] <jw4> Who are the senior architects of Canonical's OpenStack strategy that will be presenting the product roadmap in San Francisco on March 11th?  The OpenStack Roadshow...
[20:19] <hazmat> jw4: probably not the best place to ask
[20:20] <jw4> hazmat: hrm, yeah. (better suggestion?)
[20:50] <hazmat> via pm
[20:50] <jw4> yep, tx hazmat
[22:05] <mbruzek> tvansteenburgh: Can you review https://github.com/juju-solutions/bundletester/pull/11 when you get a chance?
[22:06] <tvansteenburgh> mbruzek: yup
[22:06] <mbruzek> tvansteenburgh: thank you, I am going to change the kubernetes tests to look for the BUNDLE environment variable so our tests run successfully.
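A sketch of such a guard in a test script - the variable name is from the chat; the expected value is an assumption:

    # run this suite only when the BUNDLE variable names the bundle under test
    case "${BUNDLE:-}" in
        *kubernetes*) echo "running kubernetes bundle tests" ;;
        *)            echo "BUNDLE does not name kubernetes; skipping"; exit 0 ;;
    esac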
[22:09] <tvansteenburgh> mbruzek: merged and jenkins updated
[22:09] <mbruzek> tvansteenburgh: Thank you
[22:09] <mbruzek> tvansteenburgh: just on aws?
[22:09] <tvansteenburgh> mbruzek: no, all
[22:10] <mbruzek> tvansteenburgh: sweet
[22:10] <mbruzek> Thank you
[22:17] <lazyPower> tvansteenburgh: sorry I failed to follow up on that job run - I got seriously sidetracked. I'll send something over tomorrow morning after I've done my due diligence
[22:17] <lazyPower> try and sneak that in before you get moving on your daily tasks
[22:17] <lazyPower> o/ enjoy your evening
[22:17] <tvansteenburgh> lazyPower: sure, sounds good , thanks