[00:40] <sergiusens> ev, not sure of the actual load these days, but the Calxedas that came in sure made it a lot less dramatic for the ex-PS stuff; I do want to say that we used to have a dput per merge into PPAs (binary version of trunk) and were told to stop doing that because we were overloading the system; again this is prior to Calxeda
[00:42] <sergiusens> if resource-wise it's ok, all that would need to be devised is a way to pull out the test results; not that it would be difficult
[01:03] <Mirv> infinity: rmadison reports the new Mir is in, didrocks suggested asking you to kick off a new image build?
[01:04] <Mirv> cyphermox: FYI ^
[01:06] <didrocks> ogra_: we can kick a new image (in case infinity can't get it before you are online). The Mir fix is in
[01:06] <Laney> ogra_: infinity: is there any more to that than running the line from crontab?
[01:06] <Laney> (I could do it)
[01:14] <infinity> Laney: Should be just that.  Sorry, was in transit between hotels.
[01:14] <Laney> mmkay
[01:14] <infinity> Laney: You already doing it, or should I?
[01:15] <didrocks> infinity: ok, Laney volunteered :)
[01:19] <Laney> voluntold
[01:21] <plars> ev: We could look at the images on system-image and cdimage for better numbers, but it's generally 1-2 builds/day. We didn't have one today though
[03:18] <plars> in case anyone was waiting around for it, image showed up and tests have started on it - the image CI shouldn't truncate .crash files anymore either
[03:39] <plars> asac, ogra_, rsalveti: gallery passes again \o/
[03:58] <rsalveti> plars: :D
[08:09] <ogra_> plars, yay
[08:25] <jibel> cyphermox, your latest upload of autopilot 1.3 breaks system upgrade if python-autopilot-strace is installed (bug 1246620)
[08:30] <sil2100> jibel: got the same issue yesterday when upgrading my desktop
[08:30] <sil2100> cyphermox: ^
[08:43] <sil2100> fginther: hello, poke me back when you're around :)
[10:07] <plars> psivaa, ogra_: did you see that ubuntu-ui-toolkit tests are all failing now?
[10:08] <psivaa> plars: uitoolkit passed in image 9 iirc
[10:08] <ogra_> no, i noticed that maguro had not run gallery when i looked last
[10:09] <ogra_> (but thats a while ago)
[10:09] <psivaa> i re-ran gallery and it passed
[10:09] <ogra_> great
[10:09] <plars> psivaa, ogra_: *sigh* sorry, nevermind me... I've been up since 3am de-flooding my house
[10:09]  * psivaa checks the time in texas 
[10:10] <plars> psivaa: it's 5am now
[10:10] <ogra_> plars, urgh !
[10:10] <plars> psivaa: I saw you restarted unity8 on mako, thanks - it looks promising from what I saw on maguro
[10:11] <psivaa> plars: yea, just reran a few jobs. the pass rates are actually good with 9
[10:11] <t1mp> would it be a good idea not to accept a new image if it doesn't pass UITK (trunk) autopilot tests?
[10:11] <plars> psivaa: I saw, awesome
[10:11] <plars> ogra_: psivaa reran that one already on maguro
[10:11] <plars> ogra_: it was good
[10:11] <t1mp> and with "accept a new image" I mean make it the new default image for CI
[10:11] <psivaa> plars: btw good luck with de-flooding and  having a bit of sleep :)
[10:12] <plars> ok, I'm going to go at least lay down for a bit, I think sleep is out of the question at this point
[10:14] <ogra_> oh man
[10:14] <ogra_> good luck
[10:14] <ogra_> psivaa, do we know why unity8 fails on mako ? smells like not unlocked or some such
[10:15] <psivaa> ogra_: not really, we have a better pass rate on maguro. unfortunately i don't have a mako to test locally
[10:16] <psivaa> looking through the logs to see anything obvious
[10:20] <psivaa> ogra_: we have unity8 crash there
[10:21] <ogra_> we do on maguro as well though
[10:21] <ogra_> two even
[10:29] <psivaa> yea, launch_unity is failing; tried looking at the network camera and can't see any movement. that may point to the screen being locked
[10:31] <ogra_> yup
[11:16] <popey> ogra_: no new build yet?
[11:17] <ogra_> hmm ? we have #9
[11:17] <ogra_> was built at 3am UTC
[11:17] <ogra_> (and still didnt pass the unity8 test on mako)
[11:26] <psivaa> unity8 on mako was tried again after re-flashing, still didn't pass.
[11:32] <ogra_> :(
[11:40] <popey> oh, sorry, i thought #9 was the old broken one, not the new broken one ㋛
[11:41] <davmor2> popey: No 8 was the old broken one :)
[11:41] <davmor2> popey: at least apps stay open long enough to do things now :)
[11:50] <ogra_> right, 9 looks pretty good ... except that unity8 thing :/
[11:58] <popey> ogra_: would you recommend upgrading my test phone to #9 or is it unusable?
[11:58] <ogra_> popey, looks fine on maguro is all i can say
[11:59] <ogra_> i wont upgrade my mako now that it replaces my android phone :)
[12:01]  * popey updates
[12:01] <popey> ogra_: next time you use update manager and see new apps, before you install them can you please see if you can reproduce bug 1245890
[12:02] <ogra_> popey, cant confirm, this works for me
[12:02]  * ogra_ usually adds his new apps in the store to the launcher ... they all still work fine
[12:19] <popey> hm
[12:20] <sergiusens> popey, that's a dup
[12:22]  * ogra_ shakes his fist at google 
[13:17] <sil2100> fginther: hello! Re-ping ;)
[13:17] <sil2100> fginther: did you see my annoying PM?
[13:19] <fginther> sil2100, sorry, just saw it
[13:19] <fginther> sil2100, one moment
[13:20] <sil2100> Thanks!
[13:39] <boiko> hi guys, I need a new release of lp:history-service, how should I proceed?
[13:41] <cjohnston> boiko: http://paste.ubuntu.com/6292280/
[13:42] <sil2100> boiko: hi!
[13:44] <boiko> cjohnston: hi, so, I have seen those instructions, but there is no link to the Landing Asks sheet
[13:44] <boiko> sil2100: hi!
[13:44] <sil2100> boiko: as it's history-service we're talking about, I would proceed by adding it to the Landing Asks
[13:44] <sil2100> boiko: let me give it to you, but you should actually ask your manager to do that
[13:44] <ogra_> yeah, unlikely you can edit it
[13:44] <sil2100> boiko: https://docs.google.com/a/canonical.com/spreadsheet/ccc?key=0Au6idq7TkpUUdGNWb0tTVmJLVzFZd0doV3dVOGpWemc#gid=1
[13:45] <cjohnston> what sil2100 said
[13:45] <ogra_> after all it is still the same process we've been using for two months though
[13:46] <sil2100> boiko: we can then follow up and get it released as soon as possible - since didrocks is on a sprint I guess it might take till next week to get it released? Depends on whether the guys in Oakland pick it up or not
[13:46] <boiko> sil2100: oh right, thanks, I am just asking directly cause everybody is at the sprint
[13:46] <ogra_> popey, did you try #9 on mako yet ?
[13:47] <sil2100> boiko: I would try releasing it myself, but I would first need management permission, and they're at the sprint as well...
[13:47]  * ogra_ wonders if unity8 is fine in real life ... compared to the failing tests 
[13:47] <sil2100> boiko: what issues does it fix?
[13:47] <sil2100> ogra_: I can upgrade my device if needed
[13:48] <ogra_> sil2100, why do you need management approval ? if the AP tests pass and you cant see a regression in a manual smoke test it shouldnt be blocked on this
[13:48] <boiko> sil2100: well, the one bit I need released is the pkgconfig support, but there has been other changes there as well (things that were pending from the last cycle)
[13:48] <sil2100> ogra_: I didn't know if that's ok right now - but if you say that, then I guess!
[13:49] <ogra_> well, we shouldnt block on them not being here ... as long as the landing appears safe
[13:49]  * sergiusens thinks that when managers go on sprint they should get some peer+1 assigned
[13:49] <ogra_> (test twice if you feel thats safer :) )
[13:49] <sil2100> ;)
[13:49] <sil2100> In my case even testing thrice won't help
[13:49] <sil2100> :D
[13:50] <sil2100> Anyway, thanks, then I'll look into it and try getting it released
[13:50] <ogra_> well, if it doesnt pass, indeed, keep it til next week
[13:50] <ogra_> but things that pass safely shouldnt be blocked by missing managers
[13:53] <boiko> sil2100: I can ask my manager if that makes you more comfortable, that's fine, the landing is not urgent
[14:06] <popey> ogra_: yeah, it's been fine here
[14:06] <ogra_> thought so
[14:28] <ev> plars: excellent news on no longer truncating crash files
[14:28] <ev> on the frequency of builds, I was referring more to how often we're running dpkg-buildpackage within the CI infrastructure
[15:56] <cyphermox> jibel: well, it's not a breakage in autopilot.
[15:59] <cyphermox> jibel: that said, it's on my list, I'll fix it shortly
[15:59] <cyphermox> sil2100: ^
[15:59] <sil2100> cyphermox: thanks!
[16:04] <ogra_> doanac, hey ...
[16:04] <doanac> ogra_: what's up?
[16:04] <ogra_> doanac, it looks like we have issues with unlocking the screen on mako with the unity8 tests ...
[16:05] <ogra_> psivaa ran them multiple times on image #9 ... he checked with the webcam and apparently the screen doesnt show anything ... popey tested #9 manually and unity8 behaves fine
[16:06] <doanac> ogra_: i don't think we run unlock_screen for unity8. we just kill the shell before running.
[16:06] <doanac> checking now
[16:06] <ogra_> hmm
[16:06] <psivaa> doanac: the issue is only on mako; we're not at 100% but fairly good on maguro
[16:07] <ogra_> well, there doesn't seem to be anything unusual in the logs
[16:07] <psivaa> doanac: launch_unity is failing
[16:08] <doanac> let me test locally and see what happens
[16:14] <Mirv> sil2100: are the cu2d problems because of autopilot or what's all the red about?
[16:15] <Mirv> sil2100: there seem to have been also multiple manual uploads I guess (yellow)
[16:15] <doanac> ogra_: i'm running the test at home and its failing the same way
[16:16] <ogra_> wow
[16:16] <ogra_> i wonder what makes mako special then
[16:17] <sil2100> Mirv: autopilot was reddish today as well, so I would say it's fifty-fifty - I was mostly dealing with other things today so I didn't administer cu2d in detail
[16:18] <Mirv> sil2100: it's not autopilot tests, the check job looks problematic in general somehow http://10.97.0.1:8080/job/autopilot-trusty-daily_release/281/label=autopilot-nvidia/console
[16:18] <Mirv> lxc problem?
[16:19] <sil2100> Mirv: I was using the autopilot jobs directly today, without cu2d, and just saw that there was a loong stroke of red jobs, but didn't have a moment to look into that
[16:20] <sil2100> Let me also look at that now
[16:20] <sil2100> Mirv: ah, this one - this one I saw happening once, but then next time the machine ran fine
[16:20] <Mirv> sil2100: it seems it's nvidia specific
[16:20] <sil2100> Yes, let me find my job, one moment
[16:21] <Mirv> then under qa there is also a missing package when looking at the intel side
[16:21] <sil2100> Aaah
[16:21] <sil2100> Mirv: ok, my mistake, it was happening all the time on nvidia, so it seems to be b0rken
[16:22] <Mirv> sil2100: can you look at that still, you have much more skillz in the lxc area? :)
[16:22] <Mirv> I'm fixing at least qa package list now
[16:22] <sil2100> Ok, let me take a look then
[16:24] <Mirv> my main problem is that if it complains about a missing container, I wouldn't know immediately where I should see the containers etc.
[16:26] <sil2100> hmm, I might see one of the problems
[16:27] <sil2100> The permissions to the containers directory got changed, not sure if jenkins user can access them this way - so it cannot find them
[16:27] <sil2100> Let me try fixing that
[16:28] <doanac> fginther: the permissions thing ^^^ - weren't you seeing something similar the other day?
[16:29] <sil2100> Mirv: let's run some test now maybe?
[16:29] <Mirv> cyphermox: synced your autopilot manual upload to the branch to make the prepare job work again
[16:30] <Mirv> sil2100: aha.. ok, I'll try
[16:32] <Mirv> sil2100: QA running
[16:32] <Mirv> sil2100: actually there's another check ongoing, nvidia seems to be running
[16:32] <Mirv> let's wait
[16:34] <sil2100> Mirv: keeping my fingers crossed!
[16:34]  * sil2100 wonders if that's the issue
[16:34] <sil2100> We'll see soon enough
[16:34] <cyphermox> thanks
[16:36] <jibel> sil2100, Mirv this is a change in lxc 1.0.0~alpha2-0ubuntu5, you can revert that change by chmod'ing /var/lib/lxc to make it readable by any user.
[16:37] <jibel> sil2100, Mirv next upgrade won't touch it
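The workaround jibel describes is a one-line mode change. A minimal sketch, demonstrated on a scratch directory standing in for /var/lib/lxc (the exact mode bits are an assumption about the usual pre-change default; don't copy this against a production host without checking):

```shell
# lxc 1.0.0~alpha2-0ubuntu5 tightened /var/lib/lxc so only root could
# list the containers, which is why the jenkins user stopped finding
# them. Re-opening the directory restores visibility; shown here on a
# scratch path standing in for /var/lib/lxc.
dir="$(mktemp -d)/lxc"
mkdir -p "$dir"
chmod 700 "$dir"   # the new, root-only default after the upgrade
chmod 755 "$dir"   # the revert: any user can traverse and list again
stat -c '%a' "$dir"
```

On the real host the equivalent would be `sudo chmod 755 /var/lib/lxc`, and per jibel the next lxc upgrade leaves the mode alone.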
[16:37] <cyphermox> jibel: so I looked at NM again, hard, and I can't figure out anything wrong with dnsmasq... would it be possible to have the autopkgtest run again so I could see what goes on?
[16:38] <sil2100> jibel: did that
[16:38] <jibel> cyphermox, sure, but it ran again this morning and I had to kill it again. Didn't pitti find the reason?
[16:38] <sil2100> jibel: that's what I'm trying to test now :)
[16:39] <Mirv> jibel: cool, thanks
[16:39] <cyphermox> jibel: oh, you did
[16:39] <sil2100> But good, then I diagnosed it correctly, thanks
[16:39] <jibel> sil2100, at least that's what's written in the changelog :)
[16:39] <cyphermox> jibel: alright then
[16:40] <cyphermox> jibel: I haven't heard back from pitti about it..
[16:40] <cyphermox> jibel: was the environment built from a server image or from minimal?
[16:41] <fginther> doanac, where? Yes we saw an issue with the workspace dir getting set to 600 causing jenkins to complain of mkdir failures
[17:01] <jibel> cyphermox, it's based on a server cloud-image http://developer.ubuntu.com/packaging/html/auto-pkg-test.html#executing-the-test if you want to test locally
[17:02] <cyphermox> weee
[17:02] <cyphermox> awesome, that's what I expected.
[17:02] <cyphermox> I'll definitely give it a good look, I think I know what the issue is
[17:04] <sil2100> Mirv: did it help?
[17:04] <sil2100> I need to disconnect now, but I'll be back in like 3-4 hours
[17:04] <sil2100> Mirv: drop me an e-mail if anything ;)
[17:05] <Mirv> sil2100: yes it helped
[17:05] <Mirv> at least initially it looks so
[17:05] <sil2100> \o/
[17:10] <Mirv> ok soon many of the stacks should be greener/yellower, as I'm rerunning check jobs for a bunch of them
[17:10] <alan_g> fginther: I've seen some Mir jobs today with "E: Unable to locate package liblttng-ust0" e.g. https://jenkins.qa.ubuntu.com/job/mir-android-trusty-i386-build/73/console - can you help?
[17:10] <Mirv> build failures in some too, though
[17:13] <cyphermox> Mirv: https://code.launchpad.net/~mathieu-tl/indicator-datetime/13.10.0+13.10.20131023.2-0ubuntu1/+merge/193460
[17:16] <Mirv> +1
[17:37] <cyphermox> jibel: what hoops do I need to go through if I want to delete that offending test?
[17:37] <cyphermox> it's just too broken, not bringing much usefulness
[17:37] <cyphermox> (it's running fine on my machine with the auto testing scripts)
[17:41] <cjwatson> alan_g: mir should've been rebuilt against liblttng-ust2 recently, I thought
[17:42] <cjwatson> alan_g: looks that way in trusty anyway.  maybe transient?
[17:44] <cyphermox> cjwatson: what was the issue?
[17:52] <cjwatson> cyphermox: 17:10 <alan_g> fginther: I've seen some Mir jobs today with "E: Unable to locate package liblttng-ust0" e.g. https://jenkins.qa.ubuntu.com/job/mir-android-trusty-i386-build/73/console - can you help?
[17:52] <fginther> cjwatson, thanks for the re-ping
[17:52] <alan_g> cjwatson: Hmm... I guess the https://jenkins.qa.ubuntu.com/job/mir-android-trusty-i386-build/73/console runs the tools/setup-partial-armhf-chroot.sh script?
[17:53] <cyphermox> ah, yeah
[17:54] <alan_g> and that script needs to pull liblttng-ust2?
[17:55] <cyphermox> that should just be happening automatically, python-autopilot-trace depends on it
[17:55] <cjwatson> alan_g: That's beyond my knowledge, but I would've hoped it'd be following mir's dependencies rather than hardcoding a library name
[17:55] <cjwatson> cyphermox: Downloading mir dependency: liblttng-ust0
[17:55] <cyphermox> oh, mir
[17:55] <cjwatson> I think it's probably worth retrying to confirm it's gone
[17:55] <cjwatson> Which it should be ...
[17:56] <alan_g> cjwatson: that's an embarrassing script, but no-one has spent the time to fix it properly
[17:59] <cyphermox> ah
[17:59] <cyphermox> yeah, that's hard-coded there
[18:00] <cjwatson> hardcoded> yikes!
[18:03] <fginther> :q
[18:03] <fginther> alan_g, cjohnston, cyphermox, so problem solved? setup-partial-armhf-chroot.sh needs to be updated?
[18:03] <cyphermox> yeah. updated with rm if possible ;)
[18:03] <cyphermox> fginther: otherwise I'd like to fix it to get the right stuff from debian/control
[18:04] <fginther> cyphermox, I'll leave that to the mir team
[18:04] <alan_g> fginther: yeah - I'm at EOD. I've passed the problem to kdub ;)
[18:05] <cyphermox> I'll propose a merge to repair this... I don't know what they use the script for really, but if it can't be dropped then..
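One way the hardcoded package list in setup-partial-armhf-chroot.sh could be replaced, along the lines cyphermox suggests, is to read the build dependencies out of debian/control. A rough sketch; the control file below is a made-up example, not Mir's actual packaging, and it only handles a single-line Build-Depends field:

```shell
# Sketch: derive the package list from debian/control instead of
# hardcoding names, so a library transition (liblttng-ust0 ->
# liblttng-ust2) can't silently break the chroot setup.
tmp="$(mktemp)"
cat > "$tmp" <<'EOF'
Source: demo
Build-Depends: debhelper (>= 9), liblttng-ust-dev, libboost-dev [armhf]
EOF
# Print one package name per line: take the Build-Depends field, split
# on commas, then strip version constraints and architecture qualifiers.
sed -n 's/^Build-Depends: *//p' "$tmp" |
  tr ',' '\n' |
  sed 's/([^)]*)//g; s/\[[^]]*\]//g; s/^ *//; s/ *$//' |
  grep -v '^$'
```

Real control files can wrap Build-Depends across continuation lines and use alternatives (`a | b`), which the dpkg tooling (e.g. `dpkg-checkbuilddeps`) parses properly; anything beyond this sketch should lean on those tools instead.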
[19:00] <renato_> fginther, could you help me with this MR: https://code.launchpad.net/~om26er/mediaplayer-app/imrove_autopilot_tests/+merge/184584
[19:01] <cjohnston> renato_: you should probably try the vanguard first
[19:05] <cjohnston> renato_: it looks like there is a test failure
[19:08] <fginther> renato_, the armhf node died during the build too
[19:08] <renato_> fginther, cjohnston I have approved again
[19:09] <plars> fginther, renato_: https://jenkins.qa.ubuntu.com/job/generic-mediumtests-runner-maguro/2813/console looks like unlock failed also
[19:10] <fginther> plars, the unlock worked on the second attempt
[19:10] <plars> fginther: ah, ok I see
[19:10] <plars> fginther: why is there no rebuild link on this one? do we just need to retry and post results manually if it passes?
[19:11] <Mirv> plars: fginther: are you looking at this one for which we'd need to launch a new image build? https://code.launchpad.net/~saviq/unity8/move-setenv/+merge/193426 (the e-mail thread "Unity 8 AP tests on #9")
[19:11] <fginther> plars, it was triggered by an approved MP, so to rebuild, you need to re-approve the MP
[19:12] <plars> fginther: I have no rights to do that
[19:12] <plars> fginther: do you?
[19:12] <fginther> renato_, there are a number of test failures, it may fail again
[19:12] <fginther> plars, yes, renato_ already re-approved
[19:12] <renato_> fginther, where did you see that
[19:13] <fginther> UNSTABLE: http://jenkins.qa.ubuntu.com/job/autopilot-testrunner-otto-trusty/185
[19:13] <fginther> renato_, something bad happened here too: FAILURE: http://jenkins.qa.ubuntu.com/job/generic-mediumtests-runner-maguro/2813/console
[19:14] <renato_> fginther, let me see if something changed in the code since this MR was created, because the first build worked fine
[19:14] <fginther> renato_, the maguro failure looks like a possible crash
[19:15] <fginther> renato_, ahh, this was on build 8. that had some known issues
[19:15] <fginther> renato_, let me see if I can find the bug
[19:16] <plars> Mirv: it seems to have restarted
[19:16] <plars> Mirv: it's running now
[19:18] <fginther> renato_, https://bugs.launchpad.net/mir/+bug/1245958
[19:22] <renato_> fginther, do you think this will work now?
[19:22] <renato_> bfiller, ^ ^
[19:23] <fginther> renato_, the mako/maguro tests should be more stable
[19:23] <bfiller> fginther: looks like the otto test failure is because correct codec not installed? https://jenkins.qa.ubuntu.com/job/autopilot-testrunner-otto-trusty/185/testReport/junit/mediaplayer_app.tests.test_player_with_video/TestPlayerWithVideo/test_show_controls_at_end_with_mouse_/
[19:25] <fginther> bfiller, renato_, if that's the cause, then we should be able to resolve it by adding a dependency to the -autopilot package
[19:25] <renato_> rsalveti told me not to do that
[19:26] <bfiller> fginther, renato_ : I'll check with rsalveti  - seems we need this, at least for otto
[19:26] <fginther> ack
[19:27] <Mirv> plars: yeah, I just didn't do something so I'm wondering if you'd understand more of the problems it's seeing.
[19:27] <Mirv> plars: I'm not holding my breath that it'd now succeed automatically
[19:31] <plars> Mirv: it's not a failure I've personally seen before, did anything change? looks like it passed at one point in the MP
[19:33] <bfiller> renato_: rsalveti said we shouldn't be testing mp4, and you should convert that to ogg or another free format
[19:34] <renato_> bfiller, ok I will try that
[19:34] <bfiller> renato_: mp4 works on the device because it's using hardware decoding and doesn't need gstreamer-ugly, but on desktop that won't work without that package. And you are right we don't want to install the -ugly
[19:35] <renato_> I'm afraid that ogg does not work well on device (the last time I tried)
[19:35] <renato_> this was before the new gst plugin
[19:37] <rsalveti> renato_: we might indeed have issues with a different format, but we need to use an open format
[19:37] <rsalveti> renato_: can you convert and post the video somewhere so we can test?
[19:37] <rsalveti> renato_: otherwise we might indeed, temporarily, add the av/ffmpeg dependency for the test itself
[19:43] <renato_> rsalveti, give me a minute I will transcode the video
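For reference, a transcode of the kind renato_ describes might look like this, assuming an ffmpeg build with the libtheora/libvorbis encoders; the filenames and quality levels are placeholders, not the real test asset:

```shell
# Illustrative conversion from the patent-encumbered mp4 to the free
# Theora/Vorbis-in-Ogg combination, so the autopilot test asset plays
# without gstreamer-ugly on the desktop.
ffmpeg -i testvideo.mp4 \
  -c:v libtheora -q:v 6 \
  -c:a libvorbis -q:a 4 \
  testvideo.ogv
```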
[19:48] <renato_> rsalveti, https://chinstrap.canonical.com/~renatofilho/videos/
[19:48] <plars> Mirv: well, it looks like it was mediumtests-trusty and qmluitests that failed last time, and they are passing now
[19:48] <renato_> could you try
[19:48] <plars> Mirv: it's not 100% complete yet, but so far so good
[20:00] <plars> Mirv: looks like it passed this time https://code.launchpad.net/~saviq/unity8/move-setenv/+merge/193426
[20:00] <plars> Saviq: ^
[20:09] <Mirv> plars: \o/ launching build
[20:09] <Mirv> great
[20:14] <renato_> fginther, what is this error: https://jenkins.qa.ubuntu.com/job/generic-mediumtests-runner-maguro/2877/console
[20:33] <fginther> renato_, looks like the device lost adb connection during the test. It's also at the same point in the test as https://jenkins.qa.ubuntu.com/job/generic-mediumtests-runner-maguro/2813/console
[20:33] <fginther> renato_, is it possible the test itself is causing the device to reset?
[20:34] <renato_> fginther, I do not know, do you want me to try again:
[20:36] <fginther> I'll try another build of just the maguro job
[20:54]  * Mirv testing unity8 with the setenv fix
[20:57] <ogra_> Mirv, plars, any reason why we dont just set that in the upstart job ?
[20:58] <plars> ogra_: maybe a question for Saviq
[20:59] <Mirv> yep, Saviq ^
[20:59] <ogra_> seems more appropriate for hacks ... also makes them more visible and saves you from changing the binary if you need to change or drop it
[20:59] <ogra_> (not that this is overly important, just noticing it)
[21:15]  * Mirv published unity8
[21:15] <Mirv> ogra_: #10 after that, or do you know what the plan is?
[21:30] <Saviq> ogra_, well, wherever they are - they shouldn't even be needed
[21:46] <Mirv> unity8 now in release pocket
[22:11] <ogra_> Mirv, i'll trigger a build in the morning
[22:11]  * ogra_ is off for the night
[23:14] <didrocks> Laney: infinity: can one of you kick a touch image build?
[23:14] <Laney> ok
[23:14] <didrocks> there is an unity8 fix that we are interested in :)
[23:14] <didrocks> thanks!
[23:14] <Laney> did the one I did yesterday work?
[23:14] <didrocks> we should finally get a better pass rate
[23:14] <didrocks> Laney: yeah, perfect! (but unity8 was crashing in the tests :p)
[23:14] <didrocks> Laney: I won't blame you though ;)
[23:14] <Laney> ok, just wondering, you get no feedback
[23:15] <Laney> going
[23:15] <didrocks> thanks!
[23:21] <popey> how long does it usually take for the build?
[23:24] <didrocks> popey: more than an hour
[23:25] <didrocks> actually ~40 minutes for the ubuntu part
[23:25] <popey> k
[23:25] <popey> will have a play in the morning then
[23:25] <didrocks> then, I heard some 20 minutes for assembling the android one
[23:25] <didrocks> popey: yep ;)
[23:25] <didrocks> popey: still awake? getting late, isn't it?
[23:26] <popey> yeah, watching politicians argue on telly before bed ☻
[23:28] <greyback> some people count sheep