[00:51] <lazyPower> marcoceppi: https://github.com/juju/docs/pull/181 - updated
[00:51] <marcoceppi> lazyPower: cool, I'm about to send an MP your way to fix callouts
[00:52] <lazyPower> haha, sweet. cuz we just broke the callouts on this page with that refactor
[00:52] <marcoceppi> lazyPower: yeah, it's been broken in that we're not doing what we document
[00:53] <marcoceppi> there's a bunch of malformed callouts, i was going to patch them, but instead I'll just make the plugin better
[00:53] <lazyPower> :heart:
[00:56] <marcoceppi> mm, sexy
[00:57] <marcoceppi> it's not perfect but it'll do
[00:57] <marcoceppi> lazyPower: https://github.com/juju/docs/pull/182
[00:58] <marcoceppi> lazyPower: what about arosales feedback?
[00:58] <lazyPower> That was patched in too
[00:58] <marcoceppi> cool, we need to teach him to comment on the diffs :P
[00:59] <lazyPower> he did
[00:59] <lazyPower> his diffs were a revision behind yours i think?
[00:59] <marcoceppi> he commented directly on the file
[00:59] <lazyPower> if you click on his comments they show them inline
[00:59] <marcoceppi> instead of the diff for the merge request
[00:59] <lazyPower> o
[00:59] <marcoceppi> so it's hard to see when they've been fixed
[00:59] <lazyPower> yesh
[00:59] <marcoceppi> like mine were hidden because they're out of date now
[00:59] <lazyPower> ok hang on regenerating
[00:59]  * lazyPower drum rolls
[01:00] <lazyPower> \o/
[01:00] <lazyPower> works
[01:00] <marcoceppi> lazyPower: also, for future reference, the bolding on the Note isn't required anymore
[01:00] <marcoceppi> and it can be any word as long as you use !!!!
[01:00] <marcoceppi> err !!!
[01:00] <marcoceppi> so "!!! Warning:" will work
[01:00] <marcoceppi> etc
[01:00] <marcoceppi> a true callout plugin
[01:00] <marcoceppi> what a time to be alive
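The callout pattern marcoceppi describes ("!!!" followed by any word and a colon, no bolding required) can be sketched as a one-line transform. This is an illustration only: his plugin is the actual implementation, and the emitted `<div>` markup and `callout` class name here are assumptions.

```shell
# Sketch of the "!!! <Word>:" callout rule, not the real plugin code.
# The <div> wrapper and its "callout" class are assumed for illustration.
printf '%s\n' '!!! Warning: any word works now, no bolding required' \
  | sed -E 's|^!!! ([A-Za-z]+): (.*)$|<div class="callout \1">\2</div>|'
# → <div class="callout Warning">any word works now, no bolding required</div>
```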
[01:00] <lazyPower> we are exploring the uncharted waters of writing your own generators
[01:00] <lazyPower> wewt
[01:01] <lazyPower> ok
[01:01] <lazyPower> pull master, we're good to rock on this
[01:01] <marcoceppi> both have landed huzzah, they'll be live around 4a
[01:02] <marcoceppi> 6:00 UTC, fwiw
[01:03] <rick_h_> marcoceppi: lazyPower heads up, I've got a guy looking into how to ingest the docs for use on the upcoming jujucharms.com rework.
[01:03] <marcoceppi> lazyPower: while we're romping around in the docs, we should define version'd branches
[01:03] <lazyPower> orly?
[01:03] <marcoceppi> rick_h_: would that be helpful ^^?
[01:03] <rick_h_> marcoceppi: lazyPower so we might have some requests coming in to help us wrap that around the site. We'll be wanting to make sure we address keeping it up to date, etc.
[01:03] <marcoceppi> rick_h_: cool, style wise or content wise?
[01:03] <rick_h_> marcoceppi: style wise really
[01:04] <rick_h_> marcoceppi: and figuring out how to present it in a way that fits with nav/search/etc
[01:04] <rick_h_> we'll be looking to ingest the docs into elasticsearch and building a custom docs search box
[01:04] <marcoceppi> rick_h_: cool, it's pretty straightforward, there's one main template file then it's Markdown and CSS
[01:04] <rick_h_> marcoceppi: k
[01:04] <rick_h_> marcoceppi: lazyPower so if fabrice comes asking strange questions he's researching and doing some proof of concept stuff
[01:04] <lazyPower> ack
[01:04] <marcoceppi> rick_h_: cool, sounds good
[01:05] <marcoceppi> thanks for the heads up
[01:09] <lazyPower> marcoceppi: want to do a hangout to talk about the doc structure + versioning?
[01:09] <marcoceppi> right now?
[01:09] <lazyPower> uhh
[01:09] <lazyPower> when do you want to do it?
[01:09] <marcoceppi> we can do it now
[01:09] <marcoceppi> I was just asking
[01:09] <lazyPower> i mean i can EOD whenever
[01:09] <lazyPower> ye
[01:09] <lazyPower> lets do it now while its fresh
[01:09] <marcoceppi> join my favorite hangout url
[01:10] <marcoceppi> lazyPower: https://plus.google.com/hangouts/_/canonical.com/iwonderhowlongyoucanmakethesehangouturlsseemstherereallyisnolimitatall
[01:11] <aisrael> o/
[01:18] <lazyPower> o/
[01:45] <marcoceppi> o7
[01:46] <lazyPower> o5
[01:46] <lazyPower> i am quadruple jointed
[01:51] <marcoceppi> it looks like a guy flexing
[01:52] <lazyPower> http://i.imgur.com/4mYD13u.gif
[02:13] <kadams54> rick_h_: If you're still around… how are we looking for release? I looked around at PRs and the kanban board and everything looked pretty good.
[02:58] <rick_h_> kadams54: everything is going well
[02:58] <kadams54> rick_h_: great!
[02:58] <rick_h_> just got back from coffee shop, right before I left functional charm tests passed on both precise/trusty
[02:58] <rick_h_> so the release is about 5 commands away
[02:58] <kadams54> is `rm -rf /` one of them?
[02:58] <rick_h_> hah, not quite
[03:02] <kadams54> Alright, it's off to bed for me. Here's to a smooth release.
[03:02] <rick_h_> kadams54: night
[03:09] <rick_h_> well the charms are up, waiting on ingest time
[03:09] <rick_h_> will do one final QA, but the code looks good on LP
[03:10] <rick_h_> oh heh, not the GUI channel is it
[09:56]  * bloodearnest is wondering if all the charms that use #!/bin/bash hooks need updating...
[09:57] <gnuoy> jamespage, any chance you could take a look at https://code.launchpad.net/~gnuoy/charms/trusty/keystone/next-lp-1355848/+merge/231529 ?
[09:58] <gnuoy> Tribaal, do you have a moment to take a look at https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/next-multi-console-fix/+merge/233612 ?
[12:16] <Tribaal> gnuoy: sorry, was out for a moment. I can look, yes
[12:16] <gnuoy> Tribaal, thanks.
[13:35] <Spads> Hi, so what do I need to do to get juju status to show me floating IPs?
[13:36] <Spads> jjo says it should have visibility, but I'm confused
[13:36] <Spads> use-floating-ip is the only env setting I could find that matched, but I thought that was the old behaviour that made every unit get a floating IP
[13:59] <rcj> Trusty juju tools mismatch on s3... http://paste.ubuntu.com/8425671/
[14:01] <rcj> https://bugs.launchpad.net/juju-core/+bug/1373954
[14:01] <mup> Bug #1373954: juju-tools checksum mismatch for trusty on S3 <juju-core:New> <https://launchpad.net/bugs/1373954>
[14:10] <rcj> Can someone look at Juju tool checksum mismatches blocking bootstrap? Now seen with Trusty/S3, Precise/Canonistack
[14:13] <rcj> Checksums @ http://streams.canonical.com/juju/tools/streams/v1/com.ubuntu.juju:released:tools.json match content in http://streams.canonical.com/juju/tools/releases/ which I assume is the source for the mirrors that have issues
[14:46] <arosales> marcoceppi: lazyPower: protips for commenting on diffs instead of the files?
[14:46] <arosales> marcoceppi: lazyPower re: GitHub
[15:28] <sebas5384> question: Let's say I deploy an env on AWS through my machine, is there any way another machine could communicate with the same environment and continue deploying charms?
[15:31] <lazyPower> sebas5384: the other machine would need a copy of your ~/.juju directory
[15:31] <lazyPower> specifically, the ~/.juju/environment_name.jenv
[15:31] <sebas5384> hmmm specifically the .jenv right?
[15:32] <sebas5384> holly s&*T
[15:32] <sebas5384> hehe
[15:32] <sebas5384> thanks lazyPower!
[15:32] <lazyPower> np sebas5384
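The hand-off lazyPower describes is just a file copy, simulated here with local temp directories; in practice you would scp the `.jenv` to the second machine. The "amazon" environment name and `/tmp` home dirs are placeholders, and the exact location (`~/.juju/environments/<name>.jenv` on juju 1.x) can vary by version.

```shell
# Simulated locally with cp; in practice scp the .jenv to the other machine.
# "amazon" and the /tmp home dirs are placeholders.
mkdir -p /tmp/machine-a/.juju/environments /tmp/machine-b/.juju/environments
echo 'user: admin' > /tmp/machine-a/.juju/environments/amazon.jenv
# The second machine only needs this file to talk to the same environment:
cp /tmp/machine-a/.juju/environments/amazon.jenv \
   /tmp/machine-b/.juju/environments/
```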
[15:49] <johnmc> Hi all. Can anyone give me some advice on a problem with juju failing to deploy new LXC machines on a host?
[15:50] <johnmc> For some unknown reason all new LXC containers intended to be deployed on machine "1" simply stay in pending state forever.
[15:50] <johnmc> This is what "juju stat" looks like http://pastebin.com/5b4D3d23
[15:52] <johnmc> all LXC containers from 1/lxc/22 onwards are pending. I've tried doing a "destroy-machine" and "destroy-machine --force" on them, which is how they ended up with "life: dead".
[15:53] <johnmc> I've looked at log files on machine "1", but can't see anything to suggest why it's not actioning the request for a new LXC container. No errors, nothing.
[15:54] <johnmc> if anyone can suggest logs files etc. I should be looking at that would help a lot, and right now, I've got nothing to go on.
[15:54] <johnmc> all existing LXC containers work fine, and come back after a complete physical host reboot.
[15:58] <natefinch> johnmc: it sounds like an lxc problem, if there's no juju errors.  Juju just tells lxc what to do.  Probably good start would be to ssh into the base machine and do an lxc-ls and see what it spits out
[16:00] <johnmc> Hi nate. I've been on the base machine quite a bit, and not found anything. This is what I get from lxc-ls:
[16:00] <johnmc> root@controller-cam1:~# lxc-ls
[16:00] <johnmc>   juju-machine-1-lxc-15  juju-machine-1-lxc-17  juju-machine-1-lxc-19  juju-machine-1-lxc-21   juju-machine-1-lxc-16  juju-machine-1-lxc-18  juju-machine-1-lxc-20
[16:00] <johnmc> just the healthy LXC containers
[16:03] <rick_h_> johnmc: what version of juju are you on? There were a bunch of lxc issues that have gotten fixed in recent weeks
[16:03] <johnmc> I would have thought that there'd be at least some activity in /var/log/juju/ in response to a request for a new LXC. Nothing happens there at all.
[16:04] <natefinch> johnmc: there definitely should be some output at least, when doing add-machine
[16:04] <johnmc> juju -> 1.20.7-0ubuntu1~14.04.1~juju1
[16:07] <johnmc> rick_h_: Is there a more recent version (after 1.20.7) I should try?
[16:08] <rick_h_> johnmc: nope that should have the fixes I believe so you're good there. Wanted to double check
[16:10] <johnmc> natefinch: As a test, I just did a "cp -a /var/log/juju /var/log/juju-old" on the base machine (1), followed by "juju add-machine lxc:1". I then waited a minute and ran "diff -uNr juju-old/ juju/" on the base machine. No logging had occurred.
[16:10] <rick_h_> johnmc: I think the juju logs are in a .juju directory for local stuff? /me tries to double check.
[16:10] <rick_h_> johnmc: so logs are in /home/rharding/.juju/local/log
[16:10] <rick_h_> where rharding is your username on the host machine
[16:10] <rick_h_> johnmc: and there's the all-machines.log along with per machine/unit logs there.
[16:11] <natefinch> rick_h_: it's maas, not local
[16:11] <rick_h_> hatch: natefinch oh, sorry. /me totally missed that part
[16:12]  * hatch pokes head in
[16:12] <natefinch> johnmc: if you do juju add-machine lxc:1 --debug --show-log  ... what does it print out?  It sounds like the commands aren't even making it to the server for some reason
[16:12] <hatch> rick_h_: did you mean to ping someone else? :)
[16:13] <marcoceppi> arosales: for github reviews
[16:13] <rick_h_> hatch: heh, I was starting to ping you for a different reason
[16:13] <hatch> oh haha
[16:13] <johnmc> natefinch: http://pastebin.com/sJ49b6BY
[16:14] <johnmc> says it's created
[16:15] <johnmc> says it's falling back to 1.18!
[16:21] <arosales> marcoceppi: ack, any protips?
[16:22] <marcoceppi> arosales: yeah, making a screenshot
[16:23] <marcoceppi> well, shutter keeps crashing
[16:23] <marcoceppi> when making comments, make them on the Files Changed tab
[16:23] <marcoceppi> that way they're associated with the merge request and not directly on the branch
[16:23] <marcoceppi> as the merge request is iterated upon with the feedback it'll close comments that are no longer up to date
[16:24] <marcoceppi> arosales: so at the bottom https://github.com/juju/docs/pull/181 you can see how your comments are still shown but mine are marked as outdated
[16:24] <marcoceppi> even though chuck addressed all the comments in the merge
[16:36] <johnmc> natefinch: I've updated to the latest juju-core on my juju-agent machine (machine 0), and tried again. I get the same log output as before ( http://pastebin.com/UbzN3mbY ).  Any idea where I go from here?
[16:38] <johnmc> natefinch: incidentally I can make as many LXC containers on machine "3" as I like. Only machine LXC creation attempts on "1" fail silently.
[16:48] <johnmc> Does anyone have any tips about where I should be looking for clues? Apart from the debug output I've shown ( http://pastebin.com/UbzN3mbY ) , there is no logging evidence I can find that sheds light on this.
[16:49] <johnmc> My request for an LXC container is disappearing into a black hole.
[17:08] <natefinch> johnmc: oh crap
[17:09] <natefinch> johnmc: I think we have a 1.18<->1.21 bug
[17:09] <natefinch> johnmc: we just fixed it last niedbalski
[17:09] <natefinch> johnmc: last night... heh trying auto-complete mid-sentence is not exactly what I wanted
[17:10] <natefinch> johnmc: although that doesn't explain why it would work on one machine and not the other... nevermind.
[17:11] <natefinch> johnmc: if you do the same command with --debug --show-log on machine 3 (where it works), do you get different output
[17:11] <natefinch> ?
[18:21] <johnmc> natefinch: Sorry, had to go away for a bit. This is the output when creating on machine 3: http://pastebin.com/scsj1Jzb . The machine was successfully created in less than 2 minutes.
[18:25] <natefinch> johnmc: hmm.... weird
[18:27] <johnmc> natefinch: Looking at the two base systems, they both have /var/lib/juju/tools/1.18.4-trusty-amd64 on them. Same version on both.
[18:34] <natefinch> johnmc: when you do add-machine to lxc:3, do you get output in the all-machines log that you don't see when you do it for lxc:1?  That seems like the most interesting place to start right now
[18:39] <weblife> I am using the GUI for the first time to set up an environment I created, hosted on the bootstrap node. I dragged in a zip, committed the changes and launched the instance. I had an install error and want to ssh in but I am getting an error. Could this be because I am using the GUI for launching the instance?
[18:41] <johnmc> natefinch: There's no all-machines.log on my workstation, but there is on my maas-agent machine. There is a block of log output *after* the new LXC machine was created on machine 3, but nothing that coincided with the actual request. This is the output: http://pastebin.com/fmX7bP9H
[18:42] <johnmc> natefinch: so, I suppose the behaviour is consistent across machines 1 & 3, in that you only get log output after a successful lxc machine creation.
[18:42] <natefinch> johnmc: sorry, yeah, I meant on the maas-agent machine.   You should get logs about things like the API being accessed
[18:44] <johnmc> natefinch: I get no logging whatsoever in response to the request.
[18:44] <johnmc> natefinch: only success creates any log output
[18:46] <weblife> nevermind I figured it out
[18:57] <natefinch> johnmc: I'm bringing up an environment of my own so I can double check some stuff, but realized my local environment was kinda messed up.  one is coming up now.
[19:06] <johnmc> natefinch: something interesting is going on with machine 1. Right now when I run "juju stat" it says "agent-state: down" for machine 1. I had this before and rectified it by restarting the juju daemon on machine 1. Strange thing is that it was down earlier when I posted my initial stat output ( http://pastebin.com/5b4D3d23 ).
[19:07] <johnmc> natefinch: correction - it was *not* down earlier
[19:08] <natefinch> johnmc: yeah, something is wonky with that machine
[19:11] <johnmc> natefinch: It's the total lack of any log output that gets me. I've just restarted the jujud on machine 1, and it's up again. I then tried to create yet another lxc machine on there and see no new log output on either the maas-agent, or machine 1.
[19:24] <natefinch> johnmc: can you run    sudo grep 1/lxc/30 /var/log/juju/all-machines.log      on that base maas-agent machine?
[19:32] <johnmc> natefinch: # sudo grep 1/lxc/30 /var/log/juju/all-machines.log grep: /var/log/juju/all-machines.log: No such file or directory
[19:32] <johnmc> natefinch: all-machines is only on maas-agent. Grepping the all-machines log there shows no log entries
[19:33] <johnmc> root@juju-agent:~# grep 1/lxc/30 /var/log/juju/all-machines.log \n root@juju-agent:~#
[19:34] <johnmc> natefinch: It's basically what I've been saying all along; the request for a machine vanishes without a trace.
[19:35] <natefinch> johnmc: on the same machine, if you grep for 3/lxc/10  do you get hits?
[19:42] <johnmc> natefinch: nothing for that either
[19:43] <natefinch> johnmc: ok, that's weird, since that's the one that's actually working
[19:44] <johnmc> As I said before, the lack of any logging pre-success is consistent.
[19:44] <themonk> hello
[19:47] <themonk> my juju says a machine is down, i never encounter it before, why is this happening?? and unit on that machine says "agent-state: down , agent-state-info: (started)"
[19:56] <natefinch> themonk: machines go down sometimes.... is the actual machine down, or just juju?
[19:59] <natefinch> johnmc: obviously deploying to lxc containers on machine 1 used to work, since you have some working.  Do you know when it stopped working?
[20:01] <sinzui> natefinch, themonk: if a local lxc env failed to tear down, the cruft left behind will prevent new machines from starting
[20:01] <sinzui> natefinch, themonk I know there is a juju plugin that will clean the machine
[20:01]  * sinzui looks for doc about how to clean
[20:02] <natefinch> sinzui: johnmc is having a problem adding new lxc containers to maas instances.  Not sure about themonk's problem yet
[20:02] <natefinch> sinzui: or rather... he can deploy lxc containers to one maas machine but not another
[20:03] <sinzui> I have no lxc maas experience johnmc  but there were bugs about the network the machine might give the lxc
[20:04] <natefinch> johnmc: is this a production environment?  Would you be willing to try upgrading the environment to 1.20.7?
[20:04] <johnmc> natefinch: The first time it failed I was trying to install a haproxy charm to both machine 1 & 3 at pretty much the same time (yesterday). Machine 3 succeeded, and nothing happened on machine 1. No idea why.
[20:04] <sinzui> themonk, http://pastebin.ubuntu.com/8427862/
[20:04]  * sinzui looks for lxc maas bugs
[20:05] <johnmc> natefinch: In the end I realised I should have been using hacluster, but that realisation came long after the failure became apparent.
[20:05] <johnmc> natefinch: It's not in production yet. I'll happily try anything.
[20:06] <johnmc> natefinch: Is there a doc explaining what I need to do, or is there just a simple command?
[20:07] <natefinch> sinzui: juju upgrade-juju should just work, if he's on 1.18.4, right?
[20:08] <natefinch> sinzui: (and using 1.20.7 client)
[20:09] <natefinch> johnmc: in theory "juju upgrade-juju" should just work, because you're running a newer stable version of the juju client, and the server is running an older stable  version of the server.  But I'd wait for the go-ahead from sinzui.  He's our QA head, and does about 1000x as many upgrades as I do.
[20:11] <johnmc> natefinch: thanks. I'll check back in a few minutes.
[20:17] <themonk> natefinch, sinzui, thanks for response :) machine is up, one thing is that its in a virtualbox on windows machine
[20:18] <themonk> and its in laptop
[20:45] <johnmc> sinzui: Am I safe to run a "juju upgrade-juju" using a 1.20.7 client with 1.18.4 servers?
[20:45] <sinzui> johnmc, yes. We wouldn't release it if it wasn't safe
[20:49] <johnmc> sinzui: looks like I'm trapped due to my broken (pending) lxc machines. Error message: ERROR some agents have not upgraded to the current environment version 1.18.4: machine-1-lxc-22, machine-1-lxc-23, machine-1-lxc-24, machine-1-lxc-25, machine-1-lxc-26, machine-1-lxc-27, machine-1-lxc-28, machine-1-lxc-29, machine-1-lxc-30, machine-1-lxc-31, machine-1-lxc-32
[20:49] <johnmc> sinzui: Those are the broken LXCs I've been discussing with natefinch.
[20:50] <johnmc> sinzui: they are the reason I'm trying the upgrade
[20:51] <sinzui> natefinch, you cannot upgrade while they are broken
[20:51] <sinzui> johnmc, Juju will queue the upgrade until all the machine and unit agents call home and report they are healthy
[20:52] <sinzui> johnmc, I know this because I have an arm64 instance that can go down. when it comes back, the upgrades take place
[20:53] <sinzui> johnmc, I am still reviewing the release. when I am done in about 30 minutes, I can return to the lxc maas bug list to find a solution to the problem
[21:02] <bic2k> natefinch and sinzui: I am literally having the same problem right now. Same versions, 1.18.4 to 1.20.7
[21:03] <bic2k> let me know if I can provide any details to help
[21:06] <bic2k> I lied, we are on 1.18.1 in this cluster
[21:13] <sinzui> bic2k, what do you see with "juju --show-log upgrade-juju" output will show which versions are available. We expect 1.18.4 for clouds with public access,
[21:13] <sinzui> bic2k, you can also be explicit about upgrade versions "juju --show-log upgrade-juju --version=1.18.4"
[21:14] <bic2k> sinzui: no matching tools available?
[21:14] <sinzui> bic2k, explicit within reason. Juju has some internal rules about what it thinks it can upgrade to and will look for a match
[21:15] <sinzui> bic2k, does your env use public streams such as streams.canonical.com?
[21:15] <bic2k> sinzui: good question, that's new terminology to me. Where do I look?
[21:16] <sinzui> bic2k, run "juju get-env tools-metadata-url".  empty means default to streams.canonical.com
[21:16] <bic2k> sinzui: empty it is
[21:17]  * bic2k isn't sure why Yoda said that
[21:17] <sinzui> bic2k, lets ask juju to be explicit about what it is doing. can you paste the output of
[21:17] <sinzui> juju metadata validate-tools
[21:18] <bic2k> sinzui: local tools are on 1.20.7 right now, command metadata isn't around anymore right?
[21:18] <sinzui> bic2k, juju doesn't use your client unless you exploit a developer hack called --upload-tools
[21:20] <bic2k> sinzui: says the command is not found
[21:20] <sinzui> bic2k, are you on mac or windows?
[21:20] <bic2k> sinzui: mac
[21:20] <sinzui> bic2k, well good news for you. I promised to try to make the metadata plugin for mac today when I make the 1.20.5 binaries
[21:21] <sinzui> but that requires me to reboot my machine into os x
[21:21] <sinzui> bic2k, are you in a private cloud?
[21:21] <bic2k> sinzui: public?
[21:22] <sinzui> bic2k, aws, hp, azure, joyent?
[21:22] <sinzui> or your own openstack
[21:22] <bic2k> sinzui: lol, this is a cluster on aws. Been active since 0.12.1
[21:22] <sinzui> bic2k, okay, so I think we need to ask juju about the old urls...
[21:23] <sinzui> bic2k: juju get-env tools-url
[21:23] <bic2k> sinzui: empty
[21:23] <bic2k> want me to gist the whole env without secrets?
[21:26] <sinzui> bic2k, I am not sure. I just switch to my aws env. and it shows sensible answers
[21:27] <bic2k> perhaps I got caught in some old upgrade issue with juju in the 0.18 series?
[21:27] <bic2k> sinzui: been looking in the bugs/issues but nothing related so far
[21:27] <sinzui> bic2k, once you run upgrade-juju, the action is queued. juju won't let us run it again to see its decision
[21:35] <bic2k> sinzui saves the day. Thanks again. Solution was setting my tools-metadata-url to https://streams.canonical.com/juju/tools/
[21:36] <sinzui> bic2k, okay, then I think we have learned that old envs may not get the stream url updated during upgrades. 1.18 requires it and a bootstrap will set it
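bic2k's workaround can be written out as two commands. This is a sketch in juju 1.x syntax (`set-env` is the 1.x spelling, and which version the upgrade then selects depends on the environment), not the exact transcript of what he ran:

```shell
# Sketch of the workaround in juju 1.x syntax: point the environment at the
# public streams URL, then retry the upgrade. Run against the affected env.
juju set-env tools-metadata-url=https://streams.canonical.com/juju/tools/
juju upgrade-juju
```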
[21:36] <sinzui> bic2k, I will post this issue for others
[21:37] <sinzui> bic2k, so now I suspect "juju run" won't work for you because it is a known failure for upgrades
[21:37]  * sinzui looks for bug/work around
[21:40] <sinzui> bic2k, I think you are affected by bug 1353681
[21:40] <mup> Bug #1353681: juju upgrade-juju --upload-tools to 1.18.4 fails to provide juju-run <canonical-is> <run> <juju-core:Triaged> <https://launchpad.net/bugs/1353681>
[21:41] <sinzui> oh, and so  my juju-ci3 env it seems
[21:41] <bic2k> mup: close, but this was public cloud and not using --upload-tools. No errors related to permissions either.
[21:41] <mup> bic2k: I apologize, but I'm pretty strict about only responding to known commands.
[21:42] <bic2k> sinzui: having an env to reproduce on can go a long ways in figuring it out.