[08:23] <pitti> hello
[08:24] <pitti> so several minutes ago I did "juju destroy-service autopkgtest-cloud-worker", and it did destroy the autopkgtest-cloud-worker/0 instance and the associated machine
[08:25] <pitti> but "juju status" still keeps "autopkgtest-cloud-worker" with a status of "life: dying", and it never goes away
[08:25] <pitti> http://paste.ubuntu.com/14485499/
[08:26] <pitti> is that because some subordinate charms still reference this?
[08:26] <pitti> I can't redeploy the charm while it's in that state
[08:30] <pitti> I already tried "juju destroy-relation ksplice autopkgtest-cloud-worker", and same for landscape-client, but that doesn't help
[08:42] <pitti> I tried destroying the landscape-client and ksplice services too, but still didn't help
[08:53] <marcoceppi> pitti: taking a look
[08:53] <pitti> marcoceppi: axino suggested to do "sudo restart jujud-machine-0" on the 0 machine, but that didn't help either
[08:54] <marcoceppi> pitti: that's also a bit... drastic
[08:55] <marcoceppi> pitti: it's odd, because it's referencing ksplice and landscape but I don't see services with those interfaces deployed
[08:55] <pitti> marcoceppi: well, they are subordinate charms -- they get deployed on each "real" service
[08:55] <pitti> they are not standalone services
[08:55] <marcoceppi> pitti: sure, but they're not in status at all
[08:56] <marcoceppi> pitti: right, but the relations for that service are listed with no units
[08:56] <marcoceppi> pitti: it seems the service isn't being recycled, because of this relation
[08:56] <marcoceppi> because the machine is basically gone, no units exist, etc
[08:57] <marcoceppi> does juju remove-relation autopkgtest-cloud-worker ksplice do anything?
[08:59] <pitti> marcoceppi: nothing visible at all
[08:59] <marcoceppi> pitti: weird. obviously this isn't supposed to happen
[08:59] <pitti> marcoceppi: well, they are in status, you see e. g. landscape-client/6
[08:59] <marcoceppi> pitti: right, but not on the autopkgtest service
[09:00] <pitti> yeah, as that's gone after destroy-service
[09:00] <marcoceppi> pitti: does `juju destroy-service --force autopkgtest-cloud-worker` do anything?
[09:00] <marcoceppi> it's just stuck in dying, which usually means something has failed, a relation or something else
[09:00] <pitti> marcoceppi: there's no such option
[09:00] <pitti> juju --version: 1.24.7-trusty-amd64
[09:01] <marcoceppi> :\
[09:01] <pitti> it's actually quite simple for me to destroy and re-deploy the whole env, I was just wondering as this did work before
[09:02] <pitti> I re-deployed the worker service individually two or three times
[09:03] <marcoceppi> pitti: it should work, something is stuck where Juju says it's not ready to reap the service because there's still an action pending against this
[09:03] <marcoceppi> you see this a lot when relations fail on destroy, or stop hook fails
[09:03] <marcoceppi> but I can't see exactly what it's stuck on, and since it's a 1.22 version I don't remember if there were any oddities in that release
[09:04] <pitti> 1.24
[09:04] <pitti> marcoceppi: anyway, I'll re-do the whole thing; thanks for looking!
[09:05] <marcoceppi> pitti: you may have 1.24 locally, that deployment is agent-version: 1.22.6. either way, good luck!
[09:05] <pitti> marcoceppi: ah, ok
[09:05] <pitti> marcoceppi: whatever is in prodstack
[09:06] <pitti> marcoceppi: but maybe they updated to 1.24 by now, and with re-deploying I'll get that too, I'll see
[09:07] <marcoceppi> \o/
[09:10] <pitti>     agent-version: 1.24.7
[09:10] <pitti> marcoceppi: seems so
[09:11] <pitti> marcoceppi: so next time I run into errors I'll have a less outdated version
[09:11] <marcoceppi> pitti: awesome, it's easier to support 1.24.7 since 1.25.0 is latest
[09:53] <wesleymason> Anybody seen an issue where a reactive-built charm during upgrade/install continually loops around uninstalling and reinstalling wheels?
[10:04] <marcoceppi> wesleymason: interesting.
[10:04] <cory_fu_> Can't say that I have.
[10:04] <marcoceppi> wesleymason: do you have a link to the charm? more importantly can you show me the hooks/upgrade-charm file?
[10:04] <wesleymason> marcoceppi: charm is https://github.com/1stvamp/juju-errbot (WIP)
[10:05] <marcoceppi> wesleymason: I think I know the issue, we can patch in a few mins
[10:05] <wesleymason> marcoceppi: upgrade-charm hook: http://pastebin.ubuntu.com/14485929/
[10:06] <marcoceppi> wesleymason: thought so, thanks, we'll patch this quickly
[10:06] <wesleymason> marcoceppi: cheers 😃
[10:15] <marcoceppi> wesleymason: when this lands in a few mins, `charm build` again https://github.com/juju-solutions/reactive-base-layer/pull/23
[10:20] <wesleymason> marcoceppi: aha, tvm
[11:30] <wesleymason> perhaps marcoceppi's PR could get a bit of love? https://github.com/juju-solutions/reactive-base-layer/pull/23 - in order to keep working on my charm I keep having to manually copy everything in and avoid a proper rebuild
[11:39] <tiagogomes__> Am I correct to assume the JuJu State Machine VM needs to have access to the OpenStack endpoint APIs? If I use floating IPs, does the JuJu State Machine communicate with other VMs using floating IPs or on the tenant network?
[11:44] <marcoceppi> wesleymason: I just poked cory_fu_
[11:45] <wesleymason> marcoceppi: ta
[11:45] <cory_fu_> marcoceppi: lgtm, but let me test it.  ;)
[11:46] <marcoceppi> psh, tests ;)
[11:48] <wesleymason> In other news I can now deploy errbot with juju 🙌 (minus nrpe, http and the ability to install from a wheelhouse rather than pypi)
[12:10] <cory_fu_> wesleymason: Ok, the wheelhouse loop on upgrade-charm is fixed, if you rebuild your charm.
[12:44] <wesleymason> cory_fu_: marcoceppi: cheers guys
[13:46] <D4RKS1D3> Hi, I removed a charm via command line but in the juju-gui i see the charm, which it is the way to solve this problem? thanks in advance
[13:51] <lazyPower> D4RKS1D3  whats the current status when you look at the service in `juju status`?
[13:53] <D4RKS1D3> environment: maas machines: {} services: {}
[13:53] <D4RKS1D3> juju resolved -r neutron-api/0 ERROR unit "neutron-api/0" not found
[13:54] <lazyPower> D4RKS1D3 and the GUI still has the service as listed? Reloading the browser tab doesn't correct itself?
[13:54] <D4RKS1D3> if I reload the page continue neutron-api service
[13:54] <D4RKS1D3> but not the unit
[13:55] <lazyPower> D4RKS1D3 That sometimes happens when you destroy a machine out from under a service, and don't remove the service... but if your juju status output says there's nothing registered, that's bug-worthy
[13:56] <D4RKS1D3> http://s22.postimg.org/4aqldlaz5/Screenshot_130116_13_53_43.png
[13:56] <lazyPower> D4RKS1D3 can you pastebin me the output from juju status --format=tabular?
[13:56] <D4RKS1D3> of course
[13:58] <D4RKS1D3> lazyPower, http://paste.ubuntu.com/14487078/
[13:59] <lazyPower> D4RKS1D3 ok it appears you have removed the machine/unit, but not the service
[13:59] <lazyPower> D4RKS1D3 juju destroy-service netron-api
[13:59] <lazyPower> *neutron-api
[14:00] <D4RKS1D3> I wrote this command, yes
[14:01] <lazyPower> seems stuck huh? try adding a unit to neutron-api, then running juju destroy-service neutron-api
[14:01] <D4RKS1D3> Okey lazyPower
[14:02] <D4RKS1D3> juju add-unit neutron-api --to lxc:3 ERROR cannot add unit 1/1 to service "neutron-api": cannot add unit to service "neutron-api": service is not alive
[14:03] <lazyPower> progress, seems that the service itself is stuck without a unit showing in status.. but something's lingering in the state server keeping it around.
[14:04] <lazyPower> D4RKS1D3 Can i get you to file a bug about this? https://launchpad.net/juju-core/  https://login.launchpad.net/LfAACUW1CApYLLfk/+decide
[14:04] <D4RKS1D3> Of course
[14:04] <lazyPower> include the status output, gui screenshot, and the all-machines.log, steps to reproduce
[14:05] <D4RKS1D3> Yes
[14:05] <D4RKS1D3> what do you need
[14:05] <lazyPower> > status output, gui screenshot, and the all-machines.log, steps to reproduce
[14:06] <tiagogomes__> Hi, can someone reply to this: Am I correct to assume the JuJu State Machine VM needs to have access to the OpenStack endpoint APIs? If I use floating IPs, does the JuJu State Machine communicate with other VMs using floating IPs or on the tenant network?
[14:06] <lazyPower> tiagogomes__ the tenant network i do believe
[14:06] <D4RKS1D3> I think the same, only the tenant network
[14:07] <lazyPower> tiagogomes__ juju uses the private interface for communicating with the nodes. I'm pretty sure floating-ip only affects the public interface
[14:07] <lazyPower> D4RKS1D3 oh lol i just realized i pasted a login link to you. the intended link was launchpad.net/juju-core
[14:08] <tiagogomes__> ok. I see. And the JuJu state machine needs to call the OpenStack APIs right? Or is the client that does that?
[14:08]  * tiagogomes__ is trying to sort out network requirements
[14:08] <lazyPower> The state server makes requests to the OpenStack APIs
[14:09] <lazyPower> depending, juju deploy openstack, juju deploy an environment into that openstack. So you've got the idea of your cloud, and as beisner says, your "undercloud"
[14:09] <lazyPower> so it depends on which layer of cloudy goodness you're talking about :)
[14:09] <D4RKS1D3> lazyPower, I do not see the link in juju-core
[14:10] <lazyPower> D4RKS1D3 https://bugs.launchpad.net/juju-core/+filebug
[14:10] <tiagogomes__> I see. I am talking about JuJu bootstrapped on OpenStack. And the JuJu gui makes requests to the state server right? It doesn't call the OpenStack APIs
[14:10] <lazyPower> Correct
[14:11] <lazyPower> all juju interaction is routed through the model controller (state server)
[14:12] <tiagogomes__> tvm lazyPower!
[14:12] <lazyPower> np tiagogomes__
[14:17] <D4RKS1D3> lazyPower, all-machines.log is a huge file! hahahaha
[14:17] <beisner> o/ hi all
[14:17] <D4RKS1D3> hi beisner
[14:22] <beisner> is that mr beedy?
[14:38] <lazyPower> i don't think so, bdx is beedy
[14:50] <beisner> hi D4RKS1D3 ;-)
[15:38] <beisner> lazyPower, chipping away at the rvw Q.  i don't have merge perms on mysql.  can you do the honors on this?  https://code.launchpad.net/~barryprice/charms/trusty/mysql/add-innodb_file_per_table/+merge/281398
[15:38] <jcastro> hey lazyPower
[15:38] <jcastro> so like marco told me with him the LXD provider breaks like every 6 bootstraps or so
[15:39] <jcastro> but I have not had issues
[15:39] <jcastro> what would be a good way to just automate having my laptop fire up workloads in a loop?
[15:39] <jcastro> I figure, let it run for like 8 hours and see what happens
[15:45] <jose> beisner: did you get it merged already?
[15:46] <beisner> hi jose!  nope, but it's ready to land.
[15:46] <jose> beisner: ok, let me take a look
[15:49] <lazyPower> jcastro cronjob to run a bundletester job?
[15:49] <jcastro> I've not used bundletester before
[15:49] <jcastro> is there a tldr doc somewhere?
[15:49] <lazyPower> pip install bundletester && bundletester -F -l DEBUG -v in the charmdir
[15:50] <lazyPower> lemme see what we have in the docs
[15:50] <lazyPower> i'm positive we have some stuff in the dev docs about this too
[15:51] <lazyPower> jcastro https://jujucharms.com/docs/devel/developer-testing#bundletester
[15:51] <jcastro> ack, thanks
[15:51] <jose> beisner: and, pushed!
[15:51] <lazyPower> ta jose
[15:52] <jcastro> seems to still use juju-deployer
[15:52] <beisner> jose, great, thanks.  i'll send a review mail to the list.
[15:52] <lazyPower> also, o/ jose
[15:52] <jose> hey, lazyPower!
[15:52] <lazyPower> jcastro thats probably the case. juju deploy is too new to have it consumed in our testing infra already
[15:52]  * jcastro nods
[15:52] <lazyPower> does the lxd provider not work with juju-deployer?
[15:53] <jcastro> I am not sure yet, I was just reading the github page on bundletester
[15:57] <lazyPower> beisner have i introduced you to my personal assistant, charmbot 5000? aka jose
[15:57] <jose> >.>
[15:57] <lazyPower> xD
[15:59] <jcastro> huh, juju-jitsu is still in xenial
[16:01] <lazyPower> whoa
[16:01] <lazyPower> i thought that was deprecated as of precise
[16:01] <jcastro> I'll find out how to remove it
[16:02] <jose> jcastro: on main?
[16:02] <jose> or, I mean, universe?
[16:03] <lazyPower> hey jcastro, i have a potential TC attendee to the charmer summit. Think we have some space for floating heads in a box on a laptop?
[16:03] <lazyPower> tc = teleconference
[16:04] <jose> jcastro: you'll need to get a MOTU to remove the package
[16:07] <jcastro> https://bugs.launchpad.net/ubuntu/+source/juju-jitsu/+bug/1533738
[16:07] <mup> Bug #1533738: Remove juju-jitsu package from Xenial <juju-jitsu (Ubuntu):New> <https://launchpad.net/bugs/1533738>
[16:07] <jcastro> they told me what to do
[16:07] <jcastro> we just file a bug and subscribe ubuntu-archive with an explanation
[16:32] <beisner> lazyPower || jose - can you review/land this?  it's a c-h resync to enable openstack mitaka (ceilometer + mongodb) deployability.  https://code.launchpad.net/~1chb1n/charms/trusty/mongodb/ch-sync-mitaka/+merge/282211
[16:33] <lazyPower> jose wanna take that one?
[16:33] <jose> beisner: I have to run an errand right now - if lazy hasn't done it by when I'm back I'll do it
[16:33] <jose> lazyPower:
[16:33] <jose> ^ *
[16:33] <lazyPower> sound good
[16:33] <jose> cool
[16:33] <lazyPower> beisner in a meeting, gimme a few and i'm on it
[16:33] <beisner> thx guys.  fwiw, the code it's pulling in has already been reviewed and landed in lp:charm-helpers.
[16:34] <jacekn> hello. I am trying to write an amulet test for a simple subordinate. It uses "juju-info" relation. I am trying to relate it with cs:ubuntu charm like this: "cls.deployment.relate('ubuntu:juju-info', 'collectd:juju-info')"
[16:35] <jacekn> I am getting this: http://pastebin.ubuntu.com/14488256/
[16:36] <jacekn> any ideas what I am doing wrong?
[16:36] <beisner> jacekn, are you able to make the same relation on those services outside of the test?  ie.     juju add-relation X Y
[16:36] <jose> jacekn: not sure on this, but does your charm provide juju-info?
[16:36] <jose> like, explicitly stated?
[16:36] <jacekn> beisner: only if I do "juju add-relation ubuntu collectd"
[16:37] <jacekn> beisner: so it seems to me that amulet depends on juju-info being in the metadata.yaml, which it probably shouldn't
[16:38] <beisner> jacekn, yeah i've not looked at the code, but i suspect it just parses metadata.yaml to populate valid interfaces.
[16:38] <jacekn> see this: http://pastebin.ubuntu.com/14488278/
[16:39] <jacekn> so how should I test my subordinate?
[16:39] <beisner> given the empty dict, i bet you're right. {   u'Error': u'no relations found', u'RequestId': 1, u'Response': {   }}
[16:39] <beisner> i'd say it's in python-jujuclient
[16:40] <beisner> or hrm.  that explicit juju add-relation fail takes me back to:  juju-info is somehow super-special
[16:40] <jacekn> even juju itself can't add that relation (ubuntu:juju-info collectd:juju-info)
[16:44] <lazyPower> jacekn correct, juju-info relations are bad
[16:45] <lazyPower> using it as the implicit interface on a scope: container relation is ok
[16:45] <lazyPower> but not as the relation name
[16:45] <lazyPower> beisner ^
[16:46] <lazyPower> jacekn swap that relation to be something like: "metric-host:   interface: juju-info  scope: container" and you should clear up that error
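Spelled out, lazyPower's one-liner would look like this in the subordinate's metadata.yaml (the charm name and the `metric-host` relation name are just examples from the chat, not the real collectd charm's metadata):

```yaml
# Sketch of a subordinate's metadata.yaml per lazyPower's suggestion:
# name the relation something other than juju-info; the *interface*
# stays juju-info, and scope: container attaches it to the host unit.
name: collectd
subordinate: true
requires:
  metric-host:
    interface: juju-info
    scope: container
```

The amulet call would then presumably become `cls.deployment.relate('ubuntu:juju-info', 'collectd:metric-host')`.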
[16:47] <icey> in developing a layer, not a top level layer, is it OK to use any of the @hook decorators?
[16:49] <jacekn> lazyPower: you mean in my subordinate's metadata.yaml?
[16:50] <icey> also, would it be a good idea to (if possible) make a layer that is not intended to be the top layer deployable?
[16:50] <icey> ie: calling charm build on the layer would make a deployable charm
[16:50] <lazyPower> jacekn correct
[16:50] <lazyPower> icey you can
[16:50] <lazyPower> icey you just cant mix @hook and @when decorators
[16:51] <lazyPower> icey however we do prefer that you not use the @hook decorators unless they are absolutely necessary. Instead, set up a method that uses/provides a synthetic state
[16:51] <lazyPower> and use those states to drive, as that's more natural when charming with consuming layers
[16:51] <icey> I'm thinking more about the install hook :)
[16:51] <lazyPower> it's odd to try to intercept @hook('config-changed') in 3 layers, vs doing something like @when_not('dependency.installed')
[16:52] <icey> everything else I'm fine pushing to reactive state but can't figure out a non-ridiculous way to handle the install
[16:52] <lazyPower> well
[16:52] <lazyPower> considering a handler runs every sweep until its state change occurs in the bus, coupled with a guarding @when_not()
[16:52] <lazyPower> you can set the state you want to imply install, and @when() on your other 3 or 4 method decorators
[16:56] <jacekn> lazyPower: that did not help: http://pastebin.ubuntu.com/14488410/
[16:57] <beisner> lazyPower, can you re-trigger tests for http://review.juju.solutions/review/2394 ?
[16:58] <lazyPower> beisner aws and lxc re-added to the queue
[16:58]  * lazyPower circles back
[16:58] <beisner> lazyPower, thx
[16:58] <lazyPower> inc rev/merge on above link
[17:00] <icey> lazyPower: how would I get my needed state set?
[17:00] <lazyPower> icey charms.reactive set_state('thing.amabob')
[17:00] <icey> in an @hook('install')?
[17:00] <lazyPower> sure
[17:00] <lazyPower> from charms.reactive import set_state
[17:01] <lazyPower> set_state('thing.amabob')
[17:01] <icey> I know how to set states, wondering what the recommended way to replace an install hook would be
[17:01] <lazyPower> oh
[17:01] <lazyPower> @when_not('thing.amabob')
[17:01] <lazyPower> def do_something():
[17:01] <lazyPower> # do something here
[17:01] <lazyPower> set_state('thing.amabob')
[17:01] <lazyPower> then in following methods
[17:01] <lazyPower> @when('thing.amabob')
[17:01] <jose> lazyPower: on it?
[17:01] <lazyPower> do_something() will trigger in that first sweep of the reactive bus
[17:02] <lazyPower> jose already merged
[17:02] <lazyPower> just need to push
[17:02] <lazyPower> actually the merge is hung up
[17:02] <jose> lazyPower: gotcha
[17:02] <lazyPower> if you wanna do it go for it
[17:02] <jose> bzr push :parent
[17:02] <jose> ^^ copy and paste
[17:02] <lazyPower> nah its stuck on the merge dude
[17:02] <lazyPower> $ bzr merge lp:~1chb1n/charms/trusty/mongodb/ch-sync-mitaka
[17:02] <lazyPower> sitting here on this, cycling
[17:02] <jose> ok let's see
[17:02] <jose> as long as it's not a fat charm I'm good
[17:03]  * icey lunches
[17:05] <jose> beisner: merged
[17:06] <beisner> thanks jose
[17:06] <jose> np
[17:40] <beisner> jose o/   got one hot off the press.  https://code.launchpad.net/~ajkavanagh/charms/trusty/mongodb/fix-unit-test-lint-lp1533654/+merge/282472
[17:41] <beisner> ie.  some old existing lint in the unit test file just crossed the failure line in whatever bleeding edge version of flake8 gets pulled from pypi
[17:42] <jose> beisner: lucky you, I still have that directory open
[17:42] <beisner> woot
[17:45] <jose> and landed
[17:57] <beisner> thanks again, jose :)
[18:01] <jose> np
[18:11] <icey> lazyPower: got a few minutes to chat more about the layers / reactive stuff, maybe on a hangout?
[18:30] <mbruzek> Hey guys I just reviewed a charm that uses SFTP and no crypto verification.  Is that OK?
[18:31] <mbruzek> Does using sftp remove the need to do cryptographic verification of a payload?
[18:32] <marcoceppi> mbruzek: ehhhhhhhhh
[18:32] <marcoceppi> mbruzek: it's on the fence, I'd mail the list
[18:50] <lazyPower> icey : hey sorry i stepped out for lunch
[18:50] <lazyPower> marcoceppi are we doing standup in 10?
[18:50] <icey> no worries lazyPower :) I figure IRC is async
[18:51] <marcoceppi> lazyPower: probably, yes
[18:51] <marcoceppi> moving to better wifi
[18:54] <lazyPower> icey But i can help out of band, whats up?
[18:54] <icey> just trying to wrap my head around best practice for intermediate layers and hooks
[18:55] <lazyPower> icey well i prototyped some of what this looks like elsewhere
[18:55] <lazyPower> icey this is a sandwich layer with a short lifespan, it will be deprecated in favor of layer-docker-libnetwork
[18:56] <lazyPower> https://github.com/chuckbutler/flannel-layer/blob/master/reactive/flannel.py
[18:56] <lazyPower> but i dont bind to *any* of the default hookenv hooks
[18:56] <lazyPower> i'm sniffing states off the base layer and setting states for the top layer to consume via unitdata.kv()
[19:13] <icey> marcoceppi: can you let me know how to deploy a local charm with a bundle?
[19:15] <marcoceppi> icey: set JUJU_REPOSITORY and set charm: local:trusty/charm
[19:15] <icey> thanks, yeah the bundle export from juju gui gave me an error with the charm revision number after
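marcoceppi's two steps, sketched (the charm name `errbot` and the repository path are assumptions): with `JUJU_REPOSITORY` exported, the bundle references the local charm without the revision suffix that the GUI export appended:

```yaml
# bundle.yaml -- deploy a local charm. With
#   export JUJU_REPOSITORY=$HOME/charms
# the URL local:trusty/errbot resolves to $HOME/charms/trusty/errbot.
services:
  errbot:
    charm: local:trusty/errbot
    num_units: 1
```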
[21:45] <bdx> hey whats up guys? Is there a charmhelper function, or best practice anyone knows of to generate self-signed certificates other than using subprocess and pipes?
[21:52] <marcoceppi> bdx: you should talk to lazyPower
[21:52] <lazyPower> bdx hi there
[21:52] <marcoceppi> o/ arosales
[21:52] <marcoceppi> bdx: and mbruzek
[21:52] <lazyPower> bdx have you looked at the TLS layer in interfaces.juju.solutions?
[21:52] <bdx> oooh ... I just realized you can pass a -config option to a file
[21:53] <bdx> lazyPower: I was looking at the way its done in the apache2 charm
[21:53] <lazyPower> bdx nope!
[21:53] <lazyPower> we have new stuff
[21:53] <bdx> lazyPower: nice, I'll check it now, thanks
[21:53] <lazyPower> bdx https://github.com/mbruzek/layer-tls
[21:54] <lazyPower> the tls layer is only peer capable so its not server/client aware yet
[21:54] <lazyPower> but getting there
[21:54] <lazyPower> sets up PKI, each unit generates a CSR, the leader signs it and hands back the certificate
[21:54] <lazyPower> consume, easy self signed pki
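For reference, the flow lazyPower describes maps onto plain openssl roughly like this (file names, key size, and subjects are assumptions; the layer automates all of this over Juju leadership and relations):

```shell
# Self-signed PKI sketch: the leader holds a CA, each unit makes a
# CSR, and the leader signs it and hands the certificate back.

# on the leader: create a self-signed CA key + certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=juju-ca" -keyout ca.key -out ca.crt

# on each unit: generate a key and a certificate signing request
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=unit-0" -keyout unit.key -out unit.csr

# back on the leader: sign the unit's CSR with the CA
openssl x509 -req -in unit.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out unit.crt
```

`openssl verify -CAfile ca.crt unit.crt` should then report the unit certificate as OK.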
[21:55] <bdx> this is great!
[21:55] <bdx> I want to use it now
[21:55] <lazyPower> do it!
[21:55] <lazyPower> we want bugs / feedback / layers to use this
[21:55] <lazyPower> bdx we're doing server/client key generation for kubernetes with this and it's working quite well
[21:56] <mbruzek> bdx it is still very new, but please give us feedback
[21:56] <lazyPower> so if you need an example to follow, we have one for ya
[21:56] <lazyPower> i should have the prelim work done w/ swarm to hand over soon'ish too, its about a week behind the k8s refactoring
[21:56] <bdx> lazyPower, mbruzek: entirely, awesome!
[21:57] <mbruzek> bdx we aim to please
[21:58] <bdx> mbruzek: it was a great idea to make this a layer ... every website/API endpoint could use this
[21:58] <mbruzek> I know!  Right?
[21:58] <mbruzek> To be honest we stumbled upon that with our kubernetes work
[21:59] <mbruzek> but yeah I want to make it more generic and useful to any charm layer so please advise, open bugs, or sing its praises in #juju
[22:00] <bdx> of course :-)
[22:17] <beisner> mysql charm mitaka prep ready for review/landing.  https://code.launchpad.net/~1chb1n/charms/trusty/mysql/ch-sync-mitaka/+merge/282209   i believe this is the last of the mitaka uca blocker syncs - jamespage || gnuoy
[23:06]  * D4RKS1D3 GoodNight!