=== markusfluer1 is now known as markusfluer
=== JanC is now known as Guest31516
=== JanC_ is now known as JanC
=== chihchun_afk is now known as chihchun
[06:57] PR snapd#2921 closed: releasing package snapd version 2.22.6
[07:03] PR snapd#2922 opened: many: merge 2.22.6 back into master
[07:36] <_prasen_> https://github.com/mectors/sensortag
[07:36] <_prasen_> hi
[07:36] <_prasen_> while building this snapcraft file
[07:37] <_prasen_> i get an error of property does not match the schema
[07:39] <_prasen_> Issues while validating snapcraft.yaml: The 'parts/move/filesets/sensortag-in' property does not match the required schema: 'bin/sensortag-in' is not of type 'array'
[08:21] PR snapd#2893 closed: tests: bail out if core snap is not installed
[08:21] PR snapd#2917 closed: osutil: remove duplicate build_id from other branch
[08:37] hi
[08:37] _prasen_: probably outdated repo, snapcraft is changing over time
[08:44] <_prasen_> how do i fix this?
[08:44] <_prasen_> should it be of type string?
[08:44] <_prasen_> is this a problem of vim?
[08:47] _prasen_: no, this is not related to vim
[08:47] _prasen_: I don't know, sorry, just waking up after long evening work
[08:48] _prasen_: I'd suggest you ask mectors to correct this repository
[08:48] <_prasen_> hahahhahaha
[08:48] <_prasen_> i am like starting my day
[08:48] <_prasen_> though it's well past noon
[08:48] <_prasen_> working with yaml for first time
[08:49] <_prasen_> so saw that there could be errors due to ansi indent
[08:49] <_prasen_> zyaga: will do that
[08:49] <_prasen_> zyga*
[08:49] <_prasen_> zyga: ty _/\_
=== chihchun is now known as chihchun_afk
[08:51] _prasen_: good luck :)
[08:54] mvo: do you think you could do a 2nd review of https://github.com/snapcore/snapd/pull/2848
[08:54] PR snapd#2848: cmd/snap-update-ns: add compare function for mount entries
[08:54] mvo: it's been out for 9 days and I'd like to push forward
=== chihchun_afk is now known as chihchun
[09:29] PR snapd#2477 closed: interfaces/builtin: first cut at repowerd interface
[10:45] PR snapd#2818 closed: Allow specifying another store via commandline option for the download command
=== chihchun is now known as chihchun_afk
=== chihchun_afk is now known as chihchun
=== chihchun is now known as chihchun_afk
[10:57] PR snapd#2847 closed: tests: enable docker test
[11:01] uhm
[11:01] what am i doing wrong?
[11:01] https://travis-ci.org/snapcore/snapcraft/jobs/204522771
[11:04] ogra_: i know you are not the right person but, do you know who's the snapcraft/store guy here for this ^?
[11:04] mvo: you too ^
[11:05] ppisati, fgimenez
[11:05] fgimenez: ^
[11:05] he's the master of tests :)
[11:05] ppisati: sergiusens is usually the go-to person for snapcraft* but let me have a look
[11:05] yeah, it's more like 'it explodes while trying to download a snap'
[11:06] ppisati: this looks like the store was giving a connection reset during the tests
[11:06] ppisati: so a simple retry, sometimes we have these problems (store or cdn, one of those)
[11:06] ppisati: I can restart the job for you if you want
[11:06] PR snapd#2848 closed: cmd/snap-update-ns: add compare function for mount entries
[11:07] mvo: can i do it myself? so i won't bother anyone next time
[11:08] ppisati: I'm not sure, it may well be that you can't and only people with certain privs in the snapcore group can. I restarted it now
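The schema error _prasen_ pasted above means the fileset value was written as a plain string; snapcraft expects each fileset to be a YAML list of paths. A minimal sketch of the corrected stanza (part and file names taken from the error message, all other part properties omitted) would be:

    parts:
      move:
        # plugin and other properties as in the original snapcraft.yaml
        filesets:
          sensortag-in:
            - bin/sensortag-in   # a one-element list, not a bare string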
[11:08] mvo: ta
=== chihchun_afk is now known as chihchun
[11:15] jdstrand: Easy one, maybe: snapd#2855
[11:15] PR snapd#2855: interfaces/builtin: add intel realsense udev rules into camera interface
[11:18] PR snapd#2836 closed: release: assume higher version of supported distros will still work
[11:21] PR snapd#2857 closed: interfaces/builtin: add /boot/uboot/config.txt access to core-support
[11:27] PR snapd#2817 closed: many: switch channels on refresh if needed
[11:45] <_prasen_> working behind corporate proxy, while running snapcraft a part needs npm to install a sdk. to pass the proxy variables to npm we need to set proxy and https-proxy var's in .npmrc file
[11:46] <_prasen_> and for npm install we have to pass the flags : --without-ssl , --insecure
[11:46] <_prasen_> how do i tell snapcraft to take these flags
[11:58] PR snapd#2839 closed: debian/tests: map snapd deb pockets to core snap channels for autopkgtest
[12:06] ogra_, ppisati hey :) for snapcraft-related issues about tests and CI elopio is the right person to ask
[12:06] thanks
[12:13] PR snapd#2923 opened: cmd/snap-update-ns: add function for sorting mount entries
[12:32] PR snapd#2870 closed: tests: failover test for rc.local crash
=== chihchun is now known as chihchun_afk
[12:38] niemeyer: ack
[12:39] morphis: hey, is there a slot implementation of upower-observe?
[12:40] jdstrand: o/
[12:41] hey zyga :)
[12:41] ppisati: looks like your test is green now
[12:44] jdstrand: yes, we added that recently
[12:45] morphis: right. I mean, do you have a snap that slots upower-observe
[12:45] yes we do
[12:46] jdstrand: but not yet in stable
[12:46] snap install --candidate upower
[12:46] morphis: great, thanks!
[12:47] jdstrand: any specific reason you're asking for or just for reference?
[12:49] morphis: I am preparing a PR to mediate socket(PF_NETLINK, ...) and saw that upower uses udev, and I need to add 'socket PF_NETLINK - NETLINK_KOBJECT_UEVENT' to its PermanentSlot policy and want to test that it is enough
[12:50] ah ok
[12:54] morphis: there are a few slots that you guys wrote that I've made adjustments to and will ping you in the PR
[12:55] jdstrand: thanks
[13:03] jdstrand: did you see that error I reported for zesty yesterday?
[13:04] PR snapd#2922 closed: many: merge 2.22.6 back into master
[13:04] zyga: I did. I commented in the bug
[13:04] jdstrand: it's not an urgent thing but I'd like to fix it as zesty freezes and it breaks all the tests
[13:04] jdstrand: thank you!
[13:05] jdstrand: I'll try the kernel that jj recommended
[13:27] PR snapd#2833 closed: many: allow core refresh.schedule setting
[13:28] PR snapd#2810 closed: snap: use snap-confine from the core snap
[13:28] PR snapd#2874 closed: kmod: added Specification for kmod security backend
=== chihchun_afk is now known as chihchun
[13:45] PR snapd#2878 closed: data/selinux: merge SELinux policy module
[13:45] 🎉
[13:46] zyga: this means that now proper snapd selinux integration can begin
[13:46] since we have "official" labels for things :)
[13:46] Son_Goku: :-)
[13:46] Son_Goku: I'm happy to see this
[13:46] Son_Goku: I'm sleepy after 2AM release last night
[13:46] Son_Goku: but with the main fire out I think next week will see some releases
[13:47] I'm thinking that we should regenerate the packaging from gofed
[13:47] Son_Goku: though I'm on holidays since Tue
[13:47] Son_Goku: all of it?
[13:47] Son_Goku: for snapd or for deps?
[13:47] snapd
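_prasen_'s proxy question above was never answered directly in the channel. One hedged option is to ship an .npmrc next to the part's package.json so npm picks the settings up during the snapcraft build; the proxy host below is a placeholder, and strict-ssl=false is the usual .npmrc knob for skipping certificate verification (roughly what --insecure-style flags do). Environment variables such as http_proxy exported in the shell that runs snapcraft are normally inherited by the part's build commands as well.

    # .npmrc committed alongside the part's source (proxy host is hypothetical)
    proxy=http://proxy.example.com:3128
    https-proxy=http://proxy.example.com:3128
    strict-ssl=false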
[13:47] there's been a lot of churn and I don't know what things are anymore :(
[13:47] Son_Goku: we can, I think that's ok
[13:48] now that the SELinux policy is merged into the repo, I can start rebasing my packaging on that
[13:48] :D
[13:49] zyga: do we have a file that declares what our deps are in snapd?
[13:49] I'm not completely familiar with how this works in golang
[13:49] Son_Goku: yes, it's called...
[13:49] Son_Goku: vendor/vendor.json
[13:50] ah, so it's in the vendor directory
[14:18] jdstrand: I tried to push a php-based snap to the store and got this:
[14:18] found errors in file output: unusual mode 'rwx-wx-wt' for entry './var/lib/php/sessions'
[14:19] I didn't do anything special, and php is installed via stage-packages
[14:21] mhall119: what is the name of the snap?
[14:24] laravel-mhall119
[14:25] mhall119: can you request manual review in the store?
[14:25] I can, but I also want to make sure this doesn't happen for every php-based snap if we can avoid it
[14:25] mhall119: I understand. I will fix the review tools for that
[14:26] rwx-wx-wt is unusual, but it doesn't hurt anything
[14:27] jdstrand: requested
[14:44] fgimenez: can you help me with some snapcraft tests?
[14:46] ppisati, i can try but probably better to ask elopio, i'm not actively working on them
[14:47] fgimenez: k
[14:47] elopio: ^
[14:53] PR snapd#2924 opened: interfaces: specs for apparmor, seccomp, udev
[14:54] PR snapd#2891 closed: interfaces/udev: added spec
[14:55] PR snapd#2890 closed: interfaces/apparmor: add spec
[15:00] PR snapd#2883 closed: seccomp: added Specification for seccomp backend
[15:01] hey kyrofa :) i'm trying to do some tests on the nextcloud image, have a minute for some questions?
[15:02] jdstrand: do you need a bug report for that unusual file permission?
[15:07] jdstrand: can I add the snippet that will make the zesty regression not break all the PRs for snapd?
[15:07] jdstrand: (along with a note and a bug reference)
[15:07] jdstrand: note that this is just for jailmode on classic snaps so pretty isolated
=== gaughen_ is now known as gaughen
=== mup_ is now known as mup
=== mup_ is now known as mup
[15:46] sergiusens: hey
[15:46] have you made any progress on the repo refactor thing?
=== mup_ is now known as mup
[15:54] fgimenez, ah, sorry, responded to your internal ping first, heh
[15:54] kyrofa, np! thanks :)
=== mup_ is now known as mup
[16:00] ppisati: hey!
[16:01] ppisati: so https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1652504?comments=all is apparently hitting one of the demos for MWC
[16:01] Bug #1652504: Recent updates broke Ubuntu on Raspberry Pi 3
[16:01] ppisati: I'm not sure they need video output (checking with them), but they've just passed this bug to me
[16:04] ppisati: ok, it's not critical, they're just giving us a heads up
[16:04] PR snapd#2925 opened: [WIP] interfaces: migrate existing intarfaces to use new kmod and seccomp spec
=== mup_ is now known as mup
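For reference on the store error mhall119 hit above: the mode string 'rwx-wx-wt' is octal 1733, the sticky, world-writable permission the Debian/Ubuntu php packages apply to the session directory, which the review tools merely flag as unusual. It can be checked against the snap's unpacked (primed) tree with something like:

    $ stat -c '%A %a %n' var/lib/php/sessions
    drwx-wx-wt 1733 var/lib/php/sessions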
[16:10] lool: i thought ryan fixed that already
[16:11] ppisati: do you know where the fix is?
[16:11] lool: actually, it doesn't break video, your board won't boot at all
[16:11] lool: wait wait
[16:11] lool: that bug, was because ryan's image didn't have the correct address for the dtb
[16:11] ok
[16:12] lool: if you are hitting that bug, your board won't boot at all
[16:12] lool: but you mention a problem with video output
[16:12] ppisati: that's what they said
[16:12] I'm proxying here
[16:13] lool: ok, if you want to loop me in any conversation
[16:13] ppisati: so the workaround/fix is to correct DTB addr and we'll land that soon and can be applied manually in the meantime?
[16:13] lool: i can test if that bug was fixed in the meantime
[16:13] ppisati: we're on their slack
[16:13] I've invited folks here
[16:13] lool: let me reinstall the image, so i can check what's the status
[16:14] ppisati: thanks man
[16:23] PR snapcraft#1158 closed: packaging: snapcraft as a snap
=== mup_ is now known as mup
=== mup_ is now known as mup
[16:29] @ppisati hi there, @lool passed on the word that we had trouble with video after updating our ryan's pi3 image to kernel 4.4.0-1042-raspi2
[16:29] mdye: No such command!
[16:30] bad slack habits :)
[16:31] mdye: Hey!
[16:31] lool: ahoy!
[16:31] mdye: fb or mir?
[16:32] ppisati: ^ mdye is setting up a jetson tx1 + pi3 boards demo for MWC; he's also a really awesome guy :-)
[16:32] ha, thx
[16:33] ppisati: so the gist is we build a pi3 image from ryan's base in a chroot; I use FLASH_KERNEL_SKIP=1 to get kernel updates installed successfully in the chroot
[16:33] mdye: ok
[16:33] the config.txt on the updated pi3 has device_tree_address=0x100 and device_tree_end=0x8000 (which I think are the original addresses from earlier images)
[16:35] the result is a system that will boot the kernel, but the kernel command line "console=tty1" isn't applied such that the console never outputs messages after uboot hands control over to the kernel
[16:37] mdye: uhm, that is weird
[16:37] mdye: is you did a dist-upgrade, you should get a new firmware too
[16:37] *if
[16:37] mdye: that would move the dtb address around, and your board wouldn't boot then
[16:37] mdye: i'm reinstalling an image to check, hold on
[16:38] ok, thx
=== mup_ is now known as mup
[16:43] PR snapd#2861 closed: [WIP] interface hooks: expose attrs to the interface API, snapctl enhancements (step #4)
[16:43] PR snapd#2896 closed: httputil: copy some headers over redirects
[16:43] PR snapd#2925 closed: [WIP] interfaces: migrate existing interfaces to use new kmod and seccomp spec
=== mup_ is now known as mup
[16:55] Bug #1667359 opened: After a reboot store was not accessible
[16:57] PR snapd#2923 closed: cmd/snap-update-ns: add function for sorting mount entries
[16:59] jdstrand: hey
=== mup_ is now known as mup
[17:04] Bug #1667385 opened: devmode flag disappears after disabling+re-enabling a snap
[17:05] jdstrand: can you please have a look at https://github.com/snapcore/snapd/pull/2827
[17:05] PR snapd#2827: cmd: add helpers for mounting / unmounting
[17:06] jdstrand: I think it is good to land now
[17:06] jdstrand: and I waaaaant it to land :)
=== mup_ is now known as mup
[17:08] jdstrand: the new patches are: https://github.com/snapcore/snapd/pull/2827/commits/6573f698a22db592006cf6af7d5284cf66a891e4 and https://github.com/snapcore/snapd/pull/2827/commits/9e24bee9f0cb94aef73c23667fd80364db66d3bb
[17:08] PR snapd#2827: cmd: add helpers for mounting / unmounting
=== mup_ is now known as mup
=== mup_ is now known as mup
[17:14] PR snapd#2872 closed: tests: do not use core for "All snaps up to date" check
date" check [17:14] Hi - question about ubuntu Core. I have a working machine in KVM, and when I want to ssh into it, it asks key passphrase. When I enter it correctly, it asks users password, which I don't know... any ideas? [17:14] slunatecqo, you uploaded your SSH public key to your SSO account, etc.? [17:15] kyrofa: yes - the key is set up - after I enter its passphrase it asks for user password [17:15] slunatecqo: is the 'it' ssh? ssh -vvv may help see if it rejected your key, if so [17:15] slunatecqo, which implies to me the key you're using doesn't match up to a key authorized on the device [17:16] slunatecqo, indeed, try nacc's advice [17:17] zyga: I'll add it to the list. not sure it will be today, but will be able to first thing next week (off tomorrow) [17:17] nacc: yeah... thats it.. the passphrase should be blank, but for some reason ssh says "debug2: no passphrase given, try next key" [17:21] jdstrand: I'm off next week [17:21] jdstrand: if you can I'd appreciate it, the diff is tiny [17:21] * zyga back to other things [17:21] slunatecqo: it means that it tried your ssh key (which was encrypted) but they it tried password auth [17:22] slunatecqo: the key associated with your account is not the key you've used [17:22] zyga: yeah. I understand.... thanks === chihchun is now known as chihchun_afk [17:29] PR snapd#2873 closed: tests: several improvements to the nested suite [17:30] PR snapd#2926 opened: cmd/snap-update-ns: move test data and helpers to new module [17:31] Ok - one more. I just installed docker in fresh UbuntuCore. running any container gives me "container command XXXX could not be invoked" [17:32] slunatecqo: not sure about that, what is your environment? [17:33] environment? [17:33] I am running ubuntu-core-16-amd64.img using qemu-KVM [17:33] mmm [17:33] slunatecqo: can you please report that [17:33] slunatecqo: I'm sure people will want to look at it and fix it [17:35] where? launchpad bugs? [17:35] slunatecqo: yes please, on launchpad.net/snappy [17:41] Bug #1667408 opened: docker container command XXXX could not be invoked [17:41] slunatecqo, lool might have some thoughts if he's around [17:42] kyrofa: is it ok to ask directly him? on some IRCs it is not... [17:42] slunatecqo: which docker is this? [17:42] slunatecqo: 1.13? [17:42] slunatecqo, I sorta did ;) [17:42] slunatecqo: 1.13 is known broken unfortunately [17:42] lool: Docker version 1.11.2, build v1.11.2-snap-38fd0d3 [17:43] slunatecqo, not the best to directly ping people for generic questions, but in this case I know lool is The Docker Guy [17:43] kyrofa: thank you :-) [17:43] slunatecqo: are you on classic or on core? [17:44] KVM image from here https://developer.ubuntu.com/core/get-started [17:44] lool: so I guess core :D [17:44] Yes [17:46] slunatecqo: how are you installing and running? [17:47] slunatecqo: my whole knowledge is recap-ed in https://docs.google.com/document/d/1JHa6tkuR9PtpnAVVmAJIAKuyKBy8E9ZkvG5Wbc6HZSY/edit#heading=h.j2sflymhgsj8 [17:47] which might be slightly out of date [17:47] it's also possible that some regressions occurred [17:47] https://bugs.launchpad.net/snappy/+bug/1667408 here is the bug reported [17:47] Bug #1667408: docker container command XXXX could not be invoked [17:47] basically just snap install docker [17:48] ah - that will be it... the image is armhf? [17:49] no... doesnt solve the problem.. [17:53] lool: have to leave now for few minutes. Could you write ideas to the bug on launchpad? 
[18:09] slunatecqo: I suggest you open a bug against github.com/docker/docker and link it back on Launchpad
[18:09] slunatecqo: docker is directly publishing the snap
[18:10] slunatecqo: I won't have time to debug this this week and am travelling the next 2.5 weeks
[18:10] is there any way to determine when the next refresh check is scheduled?
[18:11] slunatecqo: I don't have a clean Core environment handy, but I did give this a try on classic and it worked there: http://paste.ubuntu.com/24054379/
[18:11] slunatecqo: so it's probably specific to core
[18:31] PR snapd#2927 opened: cmd: add .indent.pro file to the tree
[18:56] pmcgowan: hey, there are some controls for that coming
[18:56] pmcgowan: it's mostly implemented
[18:56] pmcgowan: but not released yet
[18:56] Bug #1606674 changed: Unable to drive Adruino over USB from Arduino IDE snap
[18:56] pmcgowan: you can set the schedule in a very detailed way
[18:56] zyga, ok, it seems one of my systems running xenial isn't doing refreshes
[18:57] zyga, wondering how to debug
[18:57] pmcgowan: what's the version of snapd?
[18:57] it's got core 16.04.1
[18:57] 2.22.3 snapd
[18:57] jdstrand, raw-usb doesn't seem to cover /dev/ttyUSB*, right?
[18:57] pmcgowan: what does "snap info core" say?
[18:58] zyga, http://pastebin.ubuntu.com/24054592/
[18:58] refresh --list shows 8 snaps
[18:59] wow
[18:59] don't refresh yet please
[18:59] one sec
[19:00] pmcgowan: can you pastebin snap changes
[19:00] zyga, http://pastebin.ubuntu.com/24054598/
[19:00] not very interesting
[19:01] kyrofa: correct. it allows access to /dev/bus/usb/*, not /dev/.... /dev/ttyUSB* is covered by the serial-port interface
[19:01] pmcgowan: can you run journalctl -ux snapd
[19:01] anything broken?
[19:01] jdstrand, which again is gadget only
[19:01] today, yes
[19:01] jdstrand, talking about bug #1606674 here
[19:01] Bug #1606674: Unable to drive Adruino over USB from Arduino IDE snap
[19:02] kyrofa: we need hotplug
[19:02] zyga, yes. But I don't see that on the horizon
[19:02] zyga, http://paste.ubuntu.com/24054608/
[19:02] And this has been an issue from the beginning
[19:03] zyga, assume you meant -u
[19:03] zyga, we need something to unblock
[19:03] kyrofa: I think it is on the horizon. it was discussed as important in The Hague. best to ask JamieBennett on the priority
[19:04] pmcgowan: -ux is for failures but this is good
[19:04] zyga, ah, yes no matches
[19:04] kyrofa: niemeyer said in The Hague that he would design hotplug and then present it for review. I don't know where that fell in the prioritization discussions after
[19:04] pmcgowan: systemctl status snapd.refresh.timer
[19:05] kyrofa: I think we need to work on hotplug and not on stopgaps
[19:05] kyrofa: also, this should work in devmode.
[19:05] https://bugs.launchpad.net/snapd/+bug/1606674/comments/2
[19:05] Bug #1606674: Unable to drive Adruino over USB from Arduino IDE snap
[19:05] zyga, I did have it off at some point but it's been on a while now http://pastebin.ubuntu.com/24054611/
[19:05] pmcgowan: how about snap list
[19:06] fwiw, I agree with zyga on focusing on hotplug instead of stopgaps cause that will unblock a lot
[19:06] jdstrand, devmode is not the answer when you're done developing and want to ship something
[19:06] zyga, http://paste.ubuntu.com/24054612/
[19:06] kyrofa: of course, but the bug talks about it not working in devmode
[19:06] pmcgowan: ok, can you, just for debugging, save /var/lib/snapd/state.json somewhere
[19:06] pmcgowan: and then sudo snap refresh
[19:07] sure
[19:07] and hold your fingers crossed
[19:07] thanks!
[19:07] maybe you will uncover why this happens
[19:07] zyga, I think that works, but then it doesn't update again for days
[19:07] jdstrand, indeed, though it was clarified in the comments that was a normal permission issue
[19:07] kyrofa: I suggest escalating this via the snappy team's stakeholder process
[19:07] kyrofa: yes, +1 on that process
[19:07] kyrofa: yes, that is the comment I gave the url to :)
[19:07] kyrofa: it's really the best thing we can do
[19:08] kyrofa: JamieBennett holds a weekly stakeholder meeting iirc. he can help you participate in the process
[19:08] zyga, yeah it's getting everything
[19:09] pmcgowan: including core?
[19:09] yes
[19:09] (let's see what we get)
[19:09] btw why did the versioning go from 16.04.1 to 16-2
[19:09] pmcgowan: $reasons, we want something we cannot yet get so this is a stub
[19:10] pmcgowan: we'll get 16-$snapd_version
[19:10] pmcgowan: when snapcraft can
[19:10] ah
[19:17] pmcgowan: Heya
[19:18] niemeyer, hey
[19:18] pmcgowan: So what happens when you type "snap refresh core"?
[19:18] Sorry if I missed that in the backlog
[19:18] niemeyer: we have a copy of the old state
[19:18] niemeyer: for forensics
[19:18] niemeyer, snap refresh got everything (except devmodes)
[19:18] pmcgowan: I'd appreciate if you send that to us in private
[19:19] niemeyer, it always manually works
[19:19] zyga, can email to you
[19:19] pmcgowan: What happens when you run, more specifically?
[19:19] pmcgowan: "snap refresh core"
[19:19] niemeyer, it already refreshed but quite sure it would work
[19:19] pmcgowan: please
[19:20] do you think it is worth aborting and checking just core
[19:20] pmcgowan: So manually it works, but it wasn't running automatically?
[19:20] correct
[19:20] niemeyer: the timer was set and enabled
[19:21] niemeyer: but last ran ... 21 days ago
[19:21] at some point I had disabled the timer
[19:21] but it's been on for a week I think
[19:21] pmcgowan: but it was meant to run in two hours based on your pastebin above
[19:21] is it possible that the timer no longer works for whatever reason
[19:21] (e.g. runs as real root)
[19:23] The systemd timer is no longer respected..
[19:23] ?!
[19:23] It's internal now..
[19:23] in 2.21.3?
[19:23] hmmm
[19:23] Or am I mixing that with code that is yet to come?
[19:23] * niemeyer looks
[19:23] niemeyer: I think we live on the edge, let's see what was on 2.21.3
[19:24] zyga, I had 2.22.3 fwiw
[19:25] right, sorry
[19:25] PR snapd#2920 closed: wrappers/services: RemainAfterExit=yes for oneshot daemons w/ stop cmds
[19:25] 22 is the only version with further point releases
[19:27] zyga: Nevermind.. the logic I'm talking about hasn't landed yet
[19:27] pmcgowan: What do you have on "snap changes" by now?
[19:28] niemeyer: omg
[19:28] niemeyer: it's not doing refreshes in that version
[19:28] just the timer can do that
[19:28] pmcgowan: Also, please: "systemctl cat snapd.refresh.service" and "systemctl cat snapd.refresh.timer"
[19:29] zyga: Not sure I understand what you mean
[19:29] zyga: Yes, just the timer can do that, and it's the timer that does that.. ?
[19:29] niemeyer, http://paste.ubuntu.com/24054719/
[19:30] niemeyer: sorry, I misread your earlier statement, I thought we managed to release a version that doesn't refresh internally and ignores the timer
[19:30] pmcgowan: snap version
[19:30] pmcgowan: is it 2.22.6?
[19:30] Nope
[19:30] We actually never released a version that ignores the timer
[19:31] zyga, yes
[19:31] This is up for review and pending high-level conversations
[19:31] niemeyer, zyga let's see if it works now, I don't want to waste your time
[19:32] zyga, niemeyer earlier today I saw that the system time was storing local to the rtc, and I suspected that messed up the timer
[19:32] since the local time was set after snapd ran
[19:32] but I fixed that, so maybe I then just didn't wait long enough
[19:32] pmcgowan: what's the delta between local and utc?
[19:32] 5 hours
[19:32] ha
[19:32] pmcgowan: journalctl -u snapd.refresh.service
[19:32] so you only had 1/6 of a chance to update
[19:33] niemeyer, no entries
[19:33] or am I reading it wrong?
[19:33] pmcgowan: That means the timer has never ever run
[19:34] hmm
[19:34] Well, sorry, that's actually not true.. your system might not be persisting the logs
[19:35] niemeyer, maybe I somehow have the service disabled
[19:35] pmcgowan: systemctl status snapd.refresh.timer?
[19:35] niemeyer: default journal is not going across boots I think
[19:36] Yeah, it requires a mkdir
[19:36] right
[19:36] http://pastebin.ubuntu.com/24054758/
[19:37] is the service just not started?
[19:37] pmcgowan: the service is just a oneshot AFAIR
[19:37] pmcgowan: runs snap refresh
[19:38] well maybe it was the time thing, in which case it should start working
[19:38] niemeyer: set your system to local time (like windows)
[19:38] niemeyer: and see what you get
[19:39] niemeyer: just let it sit for 24 hours
[19:39] niemeyer: I think you're almost as far from UTC, right?
[19:39] niemeyer is on my timezone, I think
[19:39] UTC-5?
[19:39] pmcgowan: That log says the timer was started 3h ago
[19:39] timedatectl set-local-time true (AFAIR)
[19:40] niemeyer, yes
[19:40] hmmm
[19:40] although it spits out two lines of adding hh:mm:ss
[19:40] pmcgowan: And there are logs
[19:40] pmcgowan: Can you please try this again:
[19:40] pmcgowan: journalctl -u snapd.refresh.timer
[19:41] niemeyer: recall that in http://pastebin.ubuntu.com/24054592/ we saw the core snap was last refreshed on 2017-02-02 13:25:08 -0500 EST
[19:41] http://pastebin.ubuntu.com/24054781/
[19:41] zyga: That's irrelevant.. we don't fiddle with the systemctl timer I don't think
[19:42] niemeyer: we don't fiddle with it no, but if it ran 3 hours ago then... what happened?
[19:42] why does it say adding 5h then adding 4h
[19:42] pmcgowan: So how come it had no entries and now it does?
[19:42] pmcgowan: that's a systemd randomization thing
[19:42] pmcgowan: Did you run "systemctl restart snapd.refresh.timer" by any chance?
[19:42] pmcgowan: did you reboot in the last three hours?
[19:42] niemeyer, the service had no entries
[19:43] @pmcgowan Ah, sorry, gotcha
[19:43] niemeyer: No such command!
[19:43] pmcgowan: Did you reboot your system ~3h ago?
[19:44] zyga, up 3:13
[19:44] yes
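A small correction to the half-remembered command above: the timedatectl subcommand that controls whether the RTC is kept in local time is set-local-rtc, not set-local-time. Assuming the clock skew pmcgowan sees comes from the hardware clock being stored in local time, the usual sequence would be:

    timedatectl                        # shows "RTC in local TZ: yes/no"
    sudo timedatectl set-local-rtc 0   # keep the hardware clock in UTC again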
[19:44] so it could have run earlier
[19:44] but we don't have logs
[19:44] but I think syslog is preserved
[19:44] maybe there is a trace in syslog before the boot
[19:44] can you check at around that time?
[19:45] pmcgowan: Okay.. my theory so far is that the timer was indeed not enabled, but it's hard to validate it
[19:45] pmcgowan: Can you please enable the logs persistently by doing "mkdir /var/log/journal"
[19:45] niemeyer, sure
[19:46] zyga, what should I look for?
[19:46] pmcgowan: and systemctl restart systemd-journald
[19:46] pmcgowan: for the service name maybe
[19:46] pmcgowan: not sure how it is logged
[19:46] pmcgowan: ideally for a trace that it ran and maybe for the output
=== lamont` is now known as lamont
[19:47] Uh oh, wait
[19:48] pmcgowan: grep 'snap.*refresh' /var/log/syslog | pastebinit
[19:48] zyga, there is a failure in there http://pastebin.ubuntu.com/24054813/
[19:49] niemeyer, http://paste.ubuntu.com/24054819/
[19:49] hmmm
[19:49] niemeyer: theory: related to exit code of refresh
[19:49] niemeyer: maybe we refresh once, nothing to do, we return an error and systemd skips running this?
[19:50] DNS error
[19:50] hmmm
[19:51] zyga: I was wrong.. the snapd I have from edge is ignoring refreshes from the timer.. I'm about to figure when that started
[19:51] Commit was on Feb 1st
[19:52] niemeyer: 2.22.3 was on Date: Fri Feb 17 21:04:27 2017 +0100
[19:52] (tagged)
[19:52] hey, i am trying to upload a snap to myapps.dev via jenkins, is there a way to use snapcraft login --username xxx --password xxx?
[19:53] zyga: Nope, wasn't disable on 2.22.3
[19:53] EEight_: no but you can copy the auth data
[19:53] wasn't disabled
[19:53] niemeyer: look up in history
[19:54] zyga: can you give me more information (auth data)?
[19:54] niemeyer: mvo probably committed it on Feb 1st but it landed later
[19:54] EEight_: look at .snap/auth.json
[19:54] zyga: Was never released
[19:55] zyga: It's in master only
[19:55] Thus edge
[19:55] pmcgowan: Were you on edge by any chance?
[19:55] I see
[19:55] niemeyer: it was tracking stable
[19:56] niemeyer: there's a pastebin at the start of the conversation, let me pull it up
[19:56] http://pastebin.ubuntu.com/24054592/
[19:56] zyga, where to find this file (no .snap in my home)
[19:57] EEight_: snap login
[19:57] EEight_: then it will be there
[19:57] (sudo snap login)
[19:58] zyga, excellent got it, then how to pass that to snapcraft login for uploading my snap on myapp.devs...
[19:59] kyrofa: ^^^
[19:59] pmcgowan: That syslog is a bit suspect
[19:59] pmcgowan: How come the time goes back and forth and back and forth
[19:59] niemeyer: wow
[19:59] niemeyer: maybe ntp is not aware of local time
[20:00] A crazy clock would be a great reason for timers not to work :)
[20:00] niemeyer: and kicks in
[20:00] niemeyer: and then ... some other component does the same
[20:00] niemeyer, that's the local time screwup
[20:00] pmcgowan: is this a VM?
[20:00] no
[20:00] pmcgowan: Ok, but it's not just a screw up
[20:00] hmmm
[20:00] pmcgowan: It's going back and forth multiple times
[20:00] I think once each reboot
[20:00] or do you see otherwise
[20:00] niemeyer: now that you mentioned it that clock is everything but monotonic
[20:01] pmcgowan: I don't have reboot information there.. I just see that on Feb 23 alone it went 10, 15, 10, 16, 11, ...
[20:01] what I thought it was doing was booting to utc, then resetting to local time once the network was checked
[20:02] niemeyer, that's each boot, and the last boot I had local turned off
[20:02] thinking that was screwing with the refresh timer
[20:02] pmcgowan: OKay, that may well be the case.. can you dig into an older syslog file with that same grep line
[20:03] pmcgowan local time is stored in the hardware clock
[20:03] pmcgowan: that's what the systemd knob controls AFAIR (for compat with windows)
[20:03] pmcgowan: syslog.1 or 2.gz
[20:03] pmcgowan: do you dual boot?
[20:03] niemeyer, sure which grep again?
[20:03] pmcgowan: Well, not really.. the hardware clock stores *a* time.. it's the O.S. that gives it meaning, and that's configurable
[20:04] Sorry, that was for zyga
[20:04] pmcgowan: Let me copy, just a sec
[20:04] pmcgowan: grep 'snap.*refresh' /var/log/syslog.1 | pastebinit
[20:04] niemeyer: yes, that's correct
[20:05] niemeyer: I meant that the knob on systemd instructs it to store the local time into the clock
[20:05] niemeyer: vs UTC as is done by default
[20:05] EEight_, I'm afraid zyga is incorrect
[20:05] EEight_, snapcraft login isn't the same as snap login
[20:06] EEight_, but it's similar. Running `snapcraft login` saves a macaroon in .config/snapcraft/snapcraft.cfg
[20:06] kyrofa: ah, too bad
[20:06] niemeyer, no hits
[20:06] on .1 or .2
[20:06] EEight_, you can encrypt that file and use it in CI, though I recommend you create a store account for your bot
[20:06] (rather than giving it your macaroon)
=== rumble is now known as grumble
[20:07] pmcgowan: Any hits on any of the other files?
[20:08] EEight_, note that snapcraft has an `enable-ci` command
[20:08] kyrofa, snapcraft login, not possible to pass the username and password directly on the command line?
[20:08] Which does this for you
[20:08] But it only supports travis at the moment
[20:08] You might investigate adding support for jenkins
[20:09] niemeyer, http://paste.ubuntu.com/24054941/
[20:09] EEight_, I'm afraid not
[20:10] kyrofa, ok, so the solution is to encrypt the macaroon and pass that to snapcraft?
[20:11] EEight_, in CI, you'll need to un-encrypt that file and place it in ~/.config/snapcraft/snapcraft.cfg
[20:11] EEight_, at that point, snapcraft will be "logged in" if you will
[20:12] EEight_, note that encryption is not required here, but I assume you'll be storing it in a source tree somewhere, in which case note that the macaroon is essentially a password
[20:13] EEight_, i.e. don't share it in the clear
[20:14] hi guys, do you know if /usr/bin/lsb_release was removed from the core snap?
[20:16] ahasenack: not sure but I'd recommend to use /etc/os-release instead
[20:16] zyga: the tool is a convenient way to avoid having to parse the file (but parse the tool's output)
[20:17] kyrofa, ping
[20:18] ahasenack: the file is easier to parse and you don't have to run a separate process
[20:18] kyrofa, quick question, where do you see the configure hooks logs?
[20:18] zyga: still, if it was removed in an update, sounds like an important change
[20:18] zyga, niemeyer it just did a refresh check
[20:21] alex-abreu, you mean stdout/stderr? They belong to the task as I recall, which means you don't see them unless the hook fails
[20:22] alex-abreu, note that you can run the hook directly with `snap run --hook`
[20:22] kyrofa, yes
[20:22] If you're just talking about developing
[20:25] kyrofa, the hook then runs in the same context as the one defined when running as part of snap-confine?
[20:25] pmcgowan: Nice. I really don't know what to make from it.. the complete silence on the logs for several days hints at a manually disabled timer, which I think you said had happened, right?
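A sketch of kyrofa's suggestion above for a Jenkins job, with the encryption step left generic (openssl is just one choice here, and the passphrase handling is up to the job configuration):

    # on a workstation, after `snapcraft login`:
    openssl aes-256-cbc -salt -in ~/.config/snapcraft/snapcraft.cfg -out snapcraft.cfg.enc
    # in the Jenkins job, before `snapcraft push`:
    mkdir -p ~/.config/snapcraft
    openssl aes-256-cbc -d -in snapcraft.cfg.enc -out ~/.config/snapcraft/snapcraft.cfg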
[20:26] pmcgowan: That plus the crazy clock makes me feel like the environment is a bit unhealthy
[20:26] pmcgowan: In either case, we're changing that timer mechanism and making it internal
[20:26] pmcgowan: So one way or another, the problem will be fixed
[20:26] niemeyer, I can accept it was my system setup
[20:26] I do suspect the rtc clock setting, I could try later to put it back and see what happens
[20:27] pmcgowan: I'll remember to review the timer logic and consider what would happen in such skews
[20:27] pmcgowan: The new logic, that is
[20:27] great thanks for the help
[20:27] kyrofa, mmmh ... a hook has access to SNAP_COMMON right?
[20:27] alex-abreu, it should, yes
[20:28] alex-abreu, though note that things in there are typically owned by root if you're running into permissions issues and aren't running `snap run` via sudo
[20:32] ahasenack: we don't guarantee it will be present there
[20:32] zyga: ok, so after some debugging... We have a snap that calls /usr/bin/lsb_release
[20:32] zyga: on existing systems that upgraded to the latest core snap, our snap keeps working somehow
[20:33] zyga: but if these are rebooted, then our snap doesn't find /usr/bin/lsb_release anymore
[20:33] ahasenack: let me look
[20:33] zyga: https://bugs.launchpad.net/snappy/+bug/1619420 is about the lsb_release removal
[20:33] Bug #1619420: snappy removal of dpkg-query breaks lsb_release --all
[20:33] ahasenack: on core systems?
[20:33] it was indeed removed
[20:33] zyga: no, ubuntu
[20:33] ahasenack: maybe it was removed on ubuntu
[20:33] zyga: /usr/bin/lsb_release was removed from the core snap
[20:33] ahasenack: yeah, I just checked
[20:33] see comment #11 and #14 in that bug
[20:34] zyga: ok, that broke our snap, we will fix it, but the question I have now
[20:34] zyga: is why upgraded systems kept working
[20:34] are they seeing the old core snap?
[20:34] ahasenack: no idea
[20:34] ahasenack: maybe
[20:34] ahasenack: snap info core
[20:34] ahasenack: if you have such a system around that would be good
[20:35] we just rebooted it
[20:35] I think I have a snap list --all
[20:36] PR snapd#2924 closed: interfaces: specs for apparmor, seccomp, udev
[20:37] ahasenack: I'll gladly help you figure out what's going on with the core snap but I think the fate of lsb_release is sealed
[20:37] it's a dead thing
[20:37] it's ok
[20:38] what I wanted to understand now is the dynamics of core updates, what happens to existing snaps when the core one is updated
[20:38] what do they see
[20:38] ahasenack: nothing
[20:38] it *looks* like they saw the old core filesystem, or perhaps a mix
[20:38] ahasenack: they see the old core till the machine reboots
[20:38] or maybe it was just a case of a dangling fd
[20:38] ah
[20:38] ahasenack: unless the app is not running
[20:39] ahasenack: we persist the mount namespace across app runs
[20:39] zyga: if the app snap is restarted, it still sees the old core?
[20:39] ahasenack: as long as it was not removed
[20:39] ahasenack: yes
[20:39] ahasenack: we keep three revisions of core around
[20:39] zyga: ok, and if the app snap is updated?
[20:39] it also keeps seeing the old core?
[20:41] ahasenack: yes
[20:41] ahasenack: that's a bug I've been fixing for the past month
[20:41] ahasenack: or more now
[20:41] ahasenack: that won't change soon actually but we plan to change the mount namespace the app exists in
[20:41] zyga: ok, is there a case where the app snap would see the new core, outside of a reboot?
[20:41] it would have to be removed and reinstalled?
[20:42] ahasenack: technically when the app changes it will see its new self on top of the core it ran with when you started it for the first time since last boot
[20:42] ahasenack: not at present
[20:42] zyga: what if core got updated, say, 5x?
[20:42] I have core v1, app snap v1
[20:43] then core v2 removed lsb_release, app still sees it because core v1 is still there
[20:43] and so on, but you said we only keep 3 revisions around
[20:43] ahasenack: yes
[20:44] ahasenack: once the rootfs is removed strange things will happen
[20:44] zyga: at some point I will have core v3, v4, v5 (v5 being current)
[20:44] none with lsb_release (continuing on this example)
[20:44] then my app snap would fail outside of the reboot scenario?
[20:44] ahasenack: I'm working on snap-update-ns that goes into the snap's mount namespace and does updates
[20:44] ahasenack: I'm not entirely sure but I suspect so
[20:45] ok
[20:45] that's fine, I'm just trying to gather the failure scenarios for our bug
[20:45] where we are working on parsing /etc/os-release instead of relying on the presence of /usr/bin/lsb_release
[20:45] ahasenack: which language are you using?
[20:45] ahasenack: there are libraries for this for anything out there
[20:46] go
[20:47] we want to be sure we are on xenial
[20:47] never thought lsb_release would be gone, because, well, lsb stands for linux standards base :)
[20:49] ahasenack: lsb is as dead now as it was before
[20:50] hehe :)
[20:58] ahasenack: thank you
[20:58] ahasenack: you made me realize we have a nasty bug
[20:58] ahasenack: anyway
[20:59] "good" :)
[20:59] ahasenack: the core is okay, unless I can do what I think I did before when removal of the squashfs file made crazy things happen
[20:59] maybe it was a kernel bug though
[20:59] but what is buggy at present is that if your app was running
[20:59] and you update
[20:59] and even restart the app
[20:59] it will see the old core snap
[20:59] I'll fix that ASAP
[21:01] https://bugs.launchpad.net/snapd/+bug/1667479
[21:01] Bug #1667479: mount namespace is not discarded when core snap changes revision
[21:02] jdstrand: FYI, small omission /o\
[21:05] kyrofa, we don't seem to run in a proper snap context when running snap run --hook ... my snapctl calls fail
[21:05] alex-abreu: yes, known issue
[21:06] zyga, ah do you have a bug # ?
[21:06] alex-abreu: the context you get when your hook runs for real is special (kind of a transaction)
[21:06] alex-abreu: no, pawel was handling that, I don't recall
[21:06] alex-abreu: there's an unfinished branch that adds a generic context for all the run cases so that you can always use snapctl
[21:06] alex-abreu: but still the run in a real (internal) run is special as we abort the transaction if the hook fails
[21:07] alex-abreu: let me look if there's a bug for this
[21:07] alex-abreu: I don't see one
[21:09] thank you
[21:10] alex-abreu: please report it, I'll make sure pawel knows
[21:10] alex-abreu: sorry for the inconvenience
[21:10] zyga, np, thank you for the heads up on that, ...
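A minimal sketch of what ahasenack describes above — reading /etc/os-release from Go instead of shelling out to lsb_release. Field names follow os-release(5); the helper name and the xenial check are illustrative, not snapd code:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // osRelease parses an os-release style file into a key/value map.
    func osRelease(path string) (map[string]string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return nil, err
    	}
    	defer f.Close()
    	fields := make(map[string]string)
    	s := bufio.NewScanner(f)
    	for s.Scan() {
    		line := strings.TrimSpace(s.Text())
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue
    		}
    		parts := strings.SplitN(line, "=", 2)
    		if len(parts) != 2 {
    			continue
    		}
    		// values may or may not be quoted; strip surrounding quotes
    		fields[parts[0]] = strings.Trim(parts[1], "\"")
    	}
    	return fields, s.Err()
    }

    func main() {
    	fields, err := osRelease("/etc/os-release")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if fields["ID"] == "ubuntu" && fields["VERSION_ID"] == "16.04" {
    		fmt.Println("running on xenial")
    	}
    }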
[21:18] alex-abreu, ah, right, snapctl can't be called without snapd generating the hook context variable
[21:18] alex-abreu, as zyga mentioned, pawel is working on making snapctl usable from apps as well, which will allow it to be used from snap run as well
[21:19] kyrofa, yes, ... is there still a plan to expand snapctl to have systemctl like caps for like the current snap owned daemons etc.?
[21:19] alex-abreu, snapd generates a cryptographic token for each hook run that ties it to a specific snap (i.e. so you can run `snapctl get` without also supplying the snap name)
[21:20] alex-abreu, that's also probably pawel's domain these days
[21:20] kyrofa, yes I saw that and tried to follow what snapd was doing around that
[21:20] ah so pawel is the person to bother then
[21:27] kyrofa: pawel is a bit pulled around but I think he wants to go back there
[21:51] do i need to ask to have a track created?
[21:53] also how do i set my 2.1/stable as the default track users get when running snap install conjure-up --classic
[21:59] stokachu, yes, I believe you need to ask
[22:00] stokachu, who? I'm not quite sure
[22:30] kyrofa, cool thanks
[22:38] PR snapd#2927 closed: cmd: add .indent.pro file to the tree
[22:38] PR snapcraft#1161 opened: channels: Add track support to commands
[22:39] Gah, I can't seem to get spread to respect my kill timeout. It's defaulting to 15 minutes, which isn't long enough for this test to run, and ignoring my configured value.
[22:40] I've got "kill-timeout: 60m" in tests/main/gccgo/task.yaml (in snapd), but it's having no effect.
[22:41] zyga, any ideas on that? ^^
[22:42] I probably just need to set it at a project level instead of task level, if I can find the right place for it.
[22:45] PR snapd#2880 closed: tests: empty init (systemd) failover test
[23:00] PR snapd#2928 opened: tests: 2 workers on 14.04 and core 64, drop fixme system
[23:33] 'error: snap "whatever" requires classic or confinement override' is a rather ugly and uninformative error message
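On the spread timeout problem above: kill-timeout can appear at several levels of spread's YAML (project, backend, suite, task), so if the task-level value in task.yaml is being ignored, one hedged thing to try is raising it higher up as well, for example at the project level. The values below are only examples, not the settings the snapd tree actually uses:

    # spread.yaml (project level)
    kill-timeout: 30m

    # tests/main/gccgo/task.yaml
    summary: run the gccgo test
    kill-timeout: 60m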