[10:45] <aisrael> Any conjure-up folks around? I installed the snap from the beta channel and `conjure-up` fails with: ImportError: No module named 'conjureup'
[10:53] <aisrael> https://github.com/conjure-up/conjure-up/issues/652
[12:11] <Odd_Bloke> stokachu: ^
[12:17] <aisrael> stokachu, for context: I'm at the Juju Summit, and wanted to test conjure-up before my talk. We're pushing conjure-up pretty hard, so I'm hoping my failure is just a one-off
[13:12] <stokachu> aisrael, you are running the snap one from /snap/bin/conjure-up?
[13:12] <aisrael> stokachu, yep
[13:13] <stokachu> aisrael, one sec
[13:13] <stokachu> aisrael, hmm, do you have any pyenv environment enabled?
[13:14] <stokachu> on a fresh 16.04 install i dont see this problem
[13:14] <aisrael> stokachu, nope, nothing special wrt pyenv
[13:16] <stokachu> aisrael, hmm..
[13:19] <stokachu> aisrael, whats the output of `env`?
[13:21] <aisrael> http://pastebin.ubuntu.com/23940964/
[13:21] <stokachu> starting up a fresh system again to make sure
[13:23] <stokachu> aisrael, xenial?
[13:23] <aisrael> stokachu, Yep, 16.04.1, up to date as of this morning
[13:23] <stokachu> ok
[13:26] <stokachu> aisrael, hmm im having trouble reproducing, i just did a fresh maas deploy of 16.04.1
[13:27] <stokachu> aisrael, maybe create a fresh user account and try running it with that user
[13:27] <aisrael> stokachu, ok. I'll fire up an lxd container and see if I can reproduce it locally. It might just be something on my system, and if so, I'll try to debug and provide steps to reproduce.
[13:28] <stokachu> aisrael, thanks, the python plugin for snap is pretty specific when it comes to the interpreter it runs
[13:28] <stokachu> so it may be something with that im missing
[13:31] <aisrael> I can also try building the snap locally and see if that changes anything
[13:33] <stokachu> aisrael, thanks, it uses python 3.6 within the snap, and unsquashfs -l conjure-up*snap should show you the module path
[13:34] <stokachu> aisrael, http://paste.ubuntu.com/23941001/
[13:34] <stokachu> so the module is definitely there
[13:41] <aisrael> I only have python3.5 installed. Does the snap include the 3.6 runtime?
[13:51] <stokachu> yea it does
[14:00] <stokachu> aisrael, any luck?
[14:26] <stokachu> mup, ping
[14:26] <mup> stokachu: I apologize, but I'm pretty strict about only responding to known commands.
[14:44] <admcleod> hi - im trying to use a bundle (juju 2.1, juju deploy) with manually provisioned machines. the machines are provisioned and numbered 0,1,2,3,4, the bundle refers to them as such also, and the placement of the charms in the bundle refers to them either as that number (e.g. charm ceph-mon, to: 1, or to: - lxd:1, etc) - but when i try to deploy, it tells me "cannot create machine for holding"
[14:48] <bdx> admcleod: sup, the machines specified in the bundle don't correlate to the machine numbers in the juju environment
[14:49] <admcleod> bdx: thats what i was thinking
[14:50] <bdx> the machine placement directives in the bundle are context sensitive to that bundle, and deploying a bundle will always try to provision new machines, not map to the machines in the bundle :(
[14:50] <admcleod> bdx: well, thats peculiar since the rest of the error says:
[14:50] <admcleod> cannot add a new machine: use "juju add-machine ssh:[user@]<host>" to provision machines
[14:50] <admcleod> right. but they're already there...
[14:51] <bdx> yeah .. that would make sense, because the bundle doesn't know about them, and because manual provider, juju doesn't have access to a hoard of resources to create new machines with
[14:52] <admcleod> bdx: right. but if it says 'add a machine manually', then if i was to do that, presumably it wouldnt know, and ask me to add another machine manually
[14:52] <bdx> yea .. exactly
[14:52] <admcleod> bdx: that seems like a bug
[14:53] <bdx> bundle deploys <-> manual provider have never worked for me ... yea totally
[14:53] <sparkiegeek> wasn't there a ML thread about this?
[14:53] <bdx> admcleod: but what if you want new machines? ^ doesn't fit the rest of the use cases though
[14:54] <admcleod> sparkiegeek: ??
[14:54] <sparkiegeek> admcleod: https://lists.ubuntu.com/archives/juju/2016-December/008367.html
[14:55] <admcleod> sparkiegeek: thanks
[14:55] <sparkiegeek> admcleod: np
[15:03] <stokachu> aisrael, were you able to get conjure-up going?
[15:30] <aisrael> stokachu, not yet. Just finished with my talk, though, so I'll give it a try
[15:56] <Zic> hi, if somebody of the CDK team is here, I got the same error (Unable to authenticate the request due to an error: crypto/rsa: verification error) on a fresh brand new cluster
[15:56] <Zic> with no operation post-install
[15:57] <Zic> we just poweroff-ed one master to test the HA, and when we poweron-ed it, it came back with this error which totally messes up the whole cluster :/
[15:57] <Zic> http://paste.ubuntu.com/23941613/
[15:59] <magicaltrout> hmm dunno if jcastro has irc open to ping one of the CDK guys
[15:59] <magicaltrout> they're all in Gent
[16:01] <Zic> the last time, this error appeared a few days after some operations on our side, so we didn't know how to reproduce it
[16:02] <Zic> but here, it's *just* after the end of the install (when all juju status was green)
[16:02] <magicaltrout> i've tried to ping them on twitter Zic
[16:02] <Zic> thanks :)
[16:02] <magicaltrout> we shall see how connected they are
[16:02] <Budgie^Smore> or the mailing list, I know lazy said he would see things there
[16:03] <magicaltrout> yeah but its not real time is it! ;)
[16:04] <rick_h> The signal fires are lit
[16:04] <magicaltrout> hehe
[16:05] <lazyPower> magicaltrout - state the nature of your kubernetes emergency
[16:05] <magicaltrout> hehe
[16:05] <magicaltrout> not mine
[16:05] <Budgie^Smore> speak of the devil! ;-)
[16:05] <Zic> lazyPower: guess who is crashing everything? \o/
[16:05] <magicaltrout> Zic here has broken his cluster
[16:05] <lazyPower> Zic - ah, i could have guessed :) I spoke about you today in my talk
[16:05] <lazyPower> ok so what seems to be the trouble Zic?
[16:06] <magicaltrout> was it something like "poor guy, keeps testing our code and breaking things?" ;)
[16:06] <Zic> lazyPower: you remembered the credential error with "Unable to authenticate the request due to an error: crypto/rsa: verification error"?
[16:06] <lazyPower> magicaltrout - nah, it was actually describing how he was able to make an irrecoverable state change, and i was quite impressed
[16:06] <Zic> I had it after some days of operation and it was not clear how we can reproduce it
[16:06] <lazyPower> Zic i do remember something about this
[16:07] <lazyPower> Zic did you manage to reproduce it? or is just showing up again without a clear indicator as to why?
[16:07] <Zic> here, I just deployed a brand new cluster and just after the end of install (`juju status` showed all green) it started immediately after we rebooted one of the 3 masters
[16:07] <lazyPower> ahhh
[16:07] <lazyPower> so restarting a master has triggered a cryptographic verification error?
[16:07] <Zic> yeah
[16:08] <lazyPower> Zic - we definitely need to document that in a bug and take a look
[16:09] <Zic> we did the check that you pointed me to last time: verified the certs in /srv/kubernetes with openssl x509, and they did not change
[16:09] <Zic> maybe it's the k8s token? but I don't know how they work
[16:09] <lazyPower> ah good, so the TLS cert was verified
[16:09] <lazyPower> good call, i bet i know whats happened Zic
[16:10] <lazyPower> Zic - those tokens need to be sync'd among the masters, and they aren't right now. so you've got a new token that got loaded into the k8s object store during turn-up
[16:10] <lazyPower> and it's not agreeing with what the other masters have.
[16:10] <lazyPower> which would be why it showed up after reboot, not before
[16:11] <lazyPower> actually i'm kind of perturbed we didn't find this before reboot
[16:12] <Zic> we thought about this because one of the events on the Ingress controller was about readiness
[16:12] <Zic> kube-dns also
[16:13] <lazyPower> Zic - they mount the default service token right?
[16:13] <Zic> -> requests that ask the API with token
[16:13] <lazyPower> Zic - so the token is different across all 3 masters
[16:13] <Zic> lazyPower: how can I confirm that?
[16:13] <lazyPower> 1 sec i'm turning up k8s on my laptop
[16:14] <lazyPower> Zic - check in /srv/kubernetes/known_tokens.csv
[16:14] <lazyPower> all 3 masters will have a different token in that file is my guess
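The comparison lazyPower suggests can be scripted. A minimal sketch, assuming the usual static-token CSV layout `token,user,uid[,"groups"]` for known_tokens.csv; in practice you would first copy each master's file down with `juju scp` or ssh, and the file names used below are made up:

```shell
# Print the token recorded for a given user in one known_tokens.csv.
# Assumed columns: token,user,uid[,"group1,group2"]
token_for_user() {
    # $1 = user name, $2 = path to known_tokens.csv
    awk -F, -v u="$1" '$2 == u { print $1 }' "$2"
}

# Compare that user's token across several masters' copies of the file;
# prints MISMATCH (the suspected failure mode here) if any copy differs.
check_tokens() {
    user="$1"; shift
    first=""
    for f in "$@"; do
        t="$(token_for_user "$user" "$f")"
        [ -z "$first" ] && first="$t"
        if [ "$t" != "$first" ]; then
            echo "MISMATCH"
            return 1
        fi
    done
    echo "OK"
}
```

Something like `check_tokens admin master1.csv master2.csv master3.csv` would then confirm or rule out the divergence in one step.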
[16:15] <Zic> http://paste.ubuntu.com/23941724/
[16:16] <lazyPower> Zic - seems like thats the culprit
[16:16] <lazyPower> Zic - great pointer on this one
[16:19] <Zic> lazyPower: just before this error (and just after the install, no additional ops) we discovered that something went wrong because kube-dns and one of the Ingress controllers was in CLBO (CrashLoopBackOff)
[16:19] <Zic> do you confirm it can be tied to this problem? as it probes the readiness through the API I believe
[16:21] <lazyPower> Zic - yeah you should be able to confirm that with a kubectl describe
[16:21] <lazyPower> Zic - if its failed contacting the API with that token, i'm mostly certain you'll get that as output in the kubectl describe of that pod
[16:21] <lazyPower> i may be incorrect and it may only say it entered a failed/crashed state
[16:21] <lazyPower> but i'm mostly certain it'll give you more detailed info w/ regards to an api contact failure
[16:22] <Zic> http://paste.ubuntu.com/23941761/
[16:22] <Zic> (for kube-dns)
[16:22] <lazyPower> yap, i'm wrong
[16:22] <lazyPower> it would have to be verified with log output
[16:23] <aisrael> stokachu, I'm running into a problem building the snap. It wants me to install the core snap, but I try and it won't because ubuntu-core is already installed (and uninstallable). Run into that before?
[16:24] <Zic> mp          10m         11m         18        zk-0               Pod                                 Warning   FailedSync         {kubelet mth-k8s-01}          Error syncing pod, skipping: failed to "SetupNetwork" for "zk-0_mp" with SetupNetworkError: "Failed to setup network for pod \"zk-0_mp(c5559d40-eafe-11e6-a57e-0050569efde9)\" using network plugins \"cni\": \"cni0\" already has an IP address different from 10.1.6.1/24; Skipping pod"
[16:24] <stokachu> aisrael, what about `sudo snap refresh ubuntu-core`
[16:24] <stokachu> aisrael, see if it'll update and rename it
[16:24] <aisrael> no updates available
[16:24] <stokachu> try sudo snap refresh core
[16:24] <Zic> lazyPower: I have this one also, but it's on the old cluster (not the fresh one) don't know if we can link all this bugs
[16:25] <lazyPower> Zic - that CNI failure seems to be bubbling up FROM CNI itself though
[16:26] <lazyPower> Zic - was this similar in symptom though?
[16:26] <aisrael> stokachu, no luck. can't refresh 'core', can't find 'core'
[16:26] <lazyPower> it started with cryptographic errors and yielded broken behavior?
[16:26] <stokachu> aisrael, what does `snap version` show
[16:26] <lazyPower> Zic - also, why aren't you here :) :) We could be hacking on this in real time
[16:27] <aisrael> snap    2.21
[16:27] <aisrael> snapd   2.21
[16:27] <aisrael> series  16
[16:27] <aisrael> ubuntu  16.04
[16:27] <stokachu> well wth
[16:27] <Zic> lazyPower: because I'm stuck with the customer to debug this x)
[16:28] <lazyPower> Zic - also, thanks for following up with this.
[16:28] <stokachu> aisrael, can you hop on #snappy on freenode
[16:28] <lazyPower> Zic ah good reason. I'll see if i can get this fixed up in a branch tonight for you to deploy and test with.
[16:30] <Zic> lazyPower: but I'm counting on you to tell me all your stories about the Juju Summit :p
[16:30] <lazyPower> Zic ah, let me show you this :D
[16:30] <lazyPower> https://docs.google.com/presentation/d/1kJ-OeazxivyMkQbDC20wD-NDkW4T-H8BUTfQNxw3wOE/pub?start=false&loop=false&delayms=3000&slide=id.g1c62525350_1_90
[16:30] <Zic> or we'll be glad to welcome you at my company x)
[16:30] <Zic> it's not too far from where you are :)
[16:31] <lazyPower> hehe great :D
[16:31] <Zic> Gent -> Paris, and we offer you pizza :D
[16:37] <Zic> lazyPower: the best part is about "Operators and devs be like" slide :p
[16:40] <Zic> lazyPower: can I do something to help you before the hotfix comes?
[16:40] <lazyPower> Zic -well
[16:41] <lazyPower> you're gonna be the primary driver of testing
[16:41] <lazyPower> that's enough for me. I don't think there's anything other than ensuring the bug is filed so i can link it on the commit.
[16:41] <Budgie^Smore> So etcd is becoming the new duck tape is what I like to say :P
[16:41] <lazyPower> Budgie^Smore - hahaha
[16:41] <lazyPower> nailed it
[16:42] <Budgie^Smore> lazyPower was your talk recorded by chance?
[16:43] <xnox> stokachu, what is the difference between "kubernetes core" and "canonical distribution of kubernetes" when doing conjure up?
[16:43] <lazyPower> Budgie^Smore - it was
[16:43] <lazyPower> xnox - kubernetes-core is a much lighter bundle, it only requires 3 units
[16:44] <lazyPower> xnox canonical-distribution-of-kubernetes requires 9 units out of the gate, and introduces an API LoadBalancer to work around kubernetes not properly contacting the masters in an HA formation (they only contact the first)
[16:44] <lazyPower> so those are the 2 initial differences
[16:44] <xnox> lazyPower, is the charm and the binary that runs kubernetes the same?
[16:44] <lazyPower> xnox correct
[16:44] <xnox> good
[16:44] <xnox> lazyPower, thanks a lot!
[16:45] <Budgie^Smore> lazyPower I think I found a problem with the loadbalancer, was trying to use helm the other day and it couldn't do a port-forward command
[16:45] <lazyPower> xnox no problem :) happy to help
[16:45] <lazyPower> Budgie^Smore - we documented a work-around for the mean time
[16:45] <xnox> lazyPower, where can i do the lookup to where the conjure up "recipe is", bundle, and charms?
[16:45] <lazyPower> its known that helm hates our LB, it doesn't talk SPDY
[16:45] <Budgie^Smore> OK :) just checking, decided not to use helm at the moment and just use their charts as templates to self deploy
[16:48] <Zic> ok, I will file the bug lazyPower :) https://github.com/juju/juju is the right place?
[16:48] <lazyPower> Zic - github.com/kubernetes/kubernetes
[16:49] <lazyPower> Budgie^Smore - ok, we're going to be replacing the current APILB with a layer4 reverse proxy
[16:49] <lazyPower> so that issue should have a shelf life
[16:49] <Budgie^Smore> awesome, might have a use for helm stuff then :)
[16:50] <stokachu> xnox, https://github.com/conjure-up/spells/tree/master/canonical-kubernetes
[16:50] <stokachu> xnox, metadata will have a link to the bundle location
[16:50] <xnox> spell - that's the word!
[16:51] <lazyPower> man, as much as i like interfacing with you people, im ready to get back to engineering :P
[16:51] <lazyPower> as i'm sure you who are patiently waiting for fixes are ready for too
[16:51] <Budgie^Smore> nope, coming up with workarounds in the meantime ;-)
[16:52] <stokachu> xnox, were you able to navigate the ui without it freezing?
[16:52] <stokachu> xnox, i saw you mention that earlier
[16:53] <xnox> stokachu, i have not. but i was advised at fosdem that maybe i should not be trying to do this with: zesty, on ipv6 only network
[16:53] <stokachu> xnox, ah
[16:53] <stokachu> xnox, yea ipv6 :(
[16:54] <xnox> stokachu, at fosdem they have ipv6 networking only by default with dns64
[16:54] <xnox> ( https://fosdem.org/2017/ )
[16:54] <stokachu> xnox, fancy :)
[16:54] <xnox> it's a free event as well =)
[17:18] <xilet> Working on my first charm, are there any simple ways to have a templated config file brought in that does a simple replace on hostname? ( All of the samples I found were specific to applications requiring a good bit of python ). I mean I can always use a language to do it but I don't know if the charm system has something already in place.
[17:33] <stokachu> xilet, you doing it in bash?
[17:33] <stokachu> xilet, are using charmhelpers?
[17:38] <xilet> Right now all of the hooks are in bash
[17:56] <xilet> Let me ask a concrete example. If I wanted to add a line to /etc/hosts with <ip address> juju_server_<unit ID>     what would be the best way to accomplish that?
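No answer follows in the log, but the /etc/hosts case can be done with plain shell in a hook. A minimal sketch: `unit-get` and the `$JUJU_UNIT_NAME` environment variable are standard in a Juju hook context; the helper name and the file argument are illustrative:

```shell
# Idempotently write "<ip> <name>" into a hosts-style file: drop any
# existing entry for that name, then append the current one.
add_hosts_entry() {
    # $1 = ip, $2 = hostname, $3 = hosts file (defaults to /etc/hosts)
    local ip="$1" name="$2" file="${3:-/etc/hosts}"
    sed -i "/[[:space:]]${name}\$/d" "$file"
    printf '%s %s\n' "$ip" "$name" >> "$file"
}

# Inside a real hook this would look something like:
#   ip=$(unit-get private-address)
#   unit_id=${JUJU_UNIT_NAME##*/}   # "mycharm/3" -> "3"
#   add_hosts_entry "$ip" "juju_server_${unit_id}"
```

Deleting before appending keeps the hook safe to re-run, which matters because Juju may execute the same hook more than once.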
[18:06] <vagarwal> thanks to all the speakers and organizers - nice talks today
[19:37] <stokachu> aisrael, your talk go ok today?
[19:37] <stokachu> arosales, conjure-up work out for you all?
[19:38] <arosales> stokachu: so far so good. More workshop time tomorrow
[19:38] <stokachu> arosales, cool man
[19:38] <arosales> but encouraged folks to try what they saw in presos today with conjure
[19:39] <stokachu> arosales, sweet hopefully you'll get some good feedback
[19:39] <arosales> and increased usage of conjure
[19:39] <stokachu> yea im looking at the numbers now
[19:40] <stokachu> 10 for spark processing today
[20:10] <stormmore> so has anyone worked on securing the CDK API load balancer so it doesn't use HTTP?
[22:40] <catbus1> stokachu: hi, I tried to install conjure-up on 16.04 but it says --classic flag is unknown.
[22:41] <catbus1> stokachu: do you know if there is any change to the flag? This is what the user manual says: $ sudo snap install conjure-up --classic --beta
[22:48] <catbus1> using apt install instead
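An unknown `--classic` flag usually means the installed snapd predates classic confinement support. A sketch of checking for that (assuming, without confirmation from this log, that roughly snapd 2.20+ is needed; verify against the snapd changelog):

```shell
# True if dotted version $1 >= $2 (relies on GNU sort -V).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n1)" = "$1" ]
}

# In practice, something like:
#   have=$(snap version | awk '/^snapd/ {print $2}')
#   if version_ge "$have" "2.20"; then
#       sudo snap install conjure-up --classic --beta
#   else
#       sudo apt update && sudo apt install snapd   # get a newer snapd first
#   fi
```

Falling back to `apt install`, as catbus1 did, sidesteps the issue but installs an older conjure-up than the beta snap channel carries.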
[23:02] <magicaltrout> any charmers awake?
[23:09] <magicaltrout> kwmonroe: you alive?
[23:11] <kwmonroe> yo yo magicaltrout
[23:11] <kwmonroe> how may i direct you to cory_fu today?
[23:11] <magicaltrout> lol
[23:11] <magicaltrout> all your counterparts over here are shit
[23:11] <magicaltrout> and have gone to bed
[23:12] <magicaltrout> charm push . cs:~spiculecharms/apache-solr
[23:12] <magicaltrout> ERROR cannot post archive: unauthorized: access denied for user "spicule"
[23:12] <magicaltrout> any idea?
[23:12] <cory_fu> magicaltrout: Sorry, I'm EOD and headed to bed.  ;)
[23:12] <cory_fu> magicaltrout: Kidding.  Have you tried doing "charm logout" and "charm login"?
[23:13] <magicaltrout> just did that
[23:13] <magicaltrout> same result
[23:13] <cory_fu> Hrm
[23:14] <magicaltrout> we were discussing you earlier. I made the point drinking is so much easier when you aren't around! ;)
[23:14] <cory_fu> magicaltrout: Go to jujucharms.com and log out / in there, then charm logout/in again
[23:14] <cory_fu> ha
[23:15] <magicaltrout> i have this cool trick of hitting weird bugs the night before a talk
[23:17] <kwmonroe> magicaltrout: i'm gonna guess this is because you have all these identities.  who does "charm whoami" think you are?
[23:17] <magicaltrout> bugg@tom-laptop2:~/Projects/charms/builds/apache-solr$ charm login
[23:17] <magicaltrout> Login to Ubuntu SSO
[23:17] <magicaltrout> Press return to select a default value.
[23:17] <magicaltrout> E-Mail: tom@analytical-labs.com
[23:17] <magicaltrout> Password:
[23:17] <magicaltrout> Two-factor auth (Enter for none):
[23:17] <magicaltrout> bugg@tom-laptop2:~/Projects/charms/builds/apache-solr$ charm push . cs:~spiculecharms/apache-solr
[23:17] <magicaltrout> ERROR cannot post archive: unauthorized: access denied for user "spicule"
[23:17] <magicaltrout> bugg@tom-laptop2:~/Projects/charms/builds/apache-solr$ charm whoami
[23:17] <magicaltrout> User: spicule
[23:17] <magicaltrout> Group membership: apachefoundation, apachesoftwarefoundation, charm-contributors, containers
[23:17] <magicaltrout> bugg@tom-laptop2:~/Projects/charms/builds/apache-solr$
[23:18] <magicaltrout> hmm
[23:18] <magicaltrout> so i should be a member of my own group right?
[23:18] <magicaltrout> group/team
[23:18] <magicaltrout> whatever launchpad calls it
[23:20] <kwmonroe> indeed you should
[23:20] <kwmonroe> also, how in the heck did you get into the containers group?  those folks are picky.
[23:20] <magicaltrout> cause i'm bloody amazing
[23:20] <magicaltrout> anyway
[23:20] <kwmonroe> lol
[23:20] <magicaltrout> how did i end up out of my own group
[23:20] <magicaltrout> and how do i get back into it?
[23:21] <cory_fu> magicaltrout: You are a member of the group, but the charm store needs to refresh from LP.  Did you go to jujucharms.com like I said?
[23:22] <kwmonroe> yeah magicaltrout, it does look like spicule is in the group:  https://launchpad.net/~spiculecharms/+members.  do what cory said.
[23:22] <magicaltrout> i did cory_fu
[23:23] <magicaltrout> as if i wouldn't do something cory_fu suggested!
[23:23] <cory_fu> magicaltrout: Hrm.  I don't know.  That's a known issue and that's the fix for it.  Maybe try charm logout, jujucharms.com logout, jujucharms.com login, charm login?
[23:23] <cory_fu> :)
[23:24] <cory_fu> I've definitely had that fix that issue before
[23:24] <kwmonroe> yeah - it's gotta be in an order iirc.  luckily there aren't many permutations to try ;)
[23:26] <kwmonroe> magicaltrout: as a workaround, could you push to ~spicule and use the charm from there?
[23:26] <kwmonroe> then sort out the group membership at a more reasonable hour?
[23:29] <magicalt1out> same fail
[23:30] <magicalt1out> great i'm locked out of pushing my own charms \o/
[23:30] <cory_fu> O_O
[23:30] <magicalt1out> i'll deploy locally and complain on the mailing list
[23:30] <magicalt1out> weird though
[23:31] <kwmonroe> magicaltrout: can you push to one of those other listed groups?  apachefoundation, apachesoftwarefoundation, charm-contributors, containers
[23:32] <magicalt1out> testing
[23:33] <magicalt1out> yes kwmonroe
[23:33] <magicalt1out> that works fine
[23:34] <catbus1> Hi, I tried to add maas cloud to conjure-up, but it errors out with traceback: unable to find: <home folder>/.local/share/juju/accounts.yaml. And there is also 'cattle' not found in juju's bootstrap-config.yaml error message. Is there something I need to do between maas and juju prior to setting this up in conjure-up?
[23:37] <cory_fu> catbus1: I believe those errors are due to not having had run any juju commands before.  I thought conjure-up was updated to resolve that; can you verify if conjure-up is at the latest version?
[23:37] <cory_fu> catbus1: Otherwise, you should be able to run any juju command, such as: juju status
[23:38] <cory_fu> magicalt1out: Sorry I couldn't be of more help.  I really do have to EOD now
[23:38] <magicalt1out> yeah no worries
[23:39] <catbus1> cory_fu: I have the latest conjure-up in 16.04: 2.1.0-0~201701041302~ubuntu16.04.1
[23:40] <catbus1> cory_fu: I don't have any juju controller yet.
[23:41] <catbus1> I assume a juju controller will be created afterwards. conjure-up bootstraps juju.
[23:42] <cory_fu> catbus1: Yes, that's fine.  It's just the bootstrap logic that needs to run.  The relevant issue for conjure-up is https://github.com/conjure-up/conjure-up/issues/641
[23:42] <cory_fu> It seems that it has not been fixed yet after all
[23:43] <magicalt1out> cory_fu: your support services are failing this evening! ;)
[23:43] <cory_fu> magicalt1out: Indeed.  Happens when I try to support things I haven't worked on.  ;)
[23:44] <magicalt1out> hehe
[23:44] <magicalt1out> i blame your colleagues for going to bed....
[23:44] <magicalt1out> and certainly not the fact i'm trying to stand up a demo 12 hours before my presentation....
[23:44] <cory_fu> magicalt1out: Me too
[23:46] <magicalt1out> thanks... that means a lot!
[23:46] <magicalt1out> i also blame kwmonroe
[23:46] <cory_fu> catbus1: I'm afraid I do have to head out for the evening.  I'm not sure what timezone you're in, but if my suggestion of running `juju status` followed by re-running conjure-up doesn't work, I would have to direct you to stokachu tomorrow during the day according to EST
[23:47] <cory_fu> magicalt1out: That goes without saying, I'm sure.  ;)  But for reals, heading out.
[23:47] <cory_fu> o/
[23:47] <kwmonroe> magicalt1out: does pushing to one of your other groups get you un-stuck for the preso tomorrow?
[23:48] <magicalt1out> yeah kwmonroe its not a blocker
[23:48] <magicalt1out> just a bit shit :)
[23:48] <catbus1> cory_fu: thanks, I will ping stokachu tomorrow
[23:53] <kwmonroe> totally agreed magicalt1out