=== chihchun_afk is now known as chihchun
=== chihchun is now known as chihchun_afk
=== chihchun_afk is now known as chihchun
[07:19] o/
[07:20] morning zyga !
[07:34] mvo: hey, no luck with lock issue yet; what makes me mad is that it happens in real time, every minute there are several reports
[07:35] mvo: it might be one machine doing CI but we just don't know
[07:35] mvo: I will try to inject some possible failures to see if I can replicate the error (yesterday all my attempts failed to reproduce it)
=== chihchun is now known as chihchun_afk
=== koza|away is now known as koza
=== chihchun_afk is now known as chihchun
=== JamieBen_ is now known as JamieBennett
[08:36] zyga: I've seen the lock issue multiple times too on my system but lost that state already a while back, will tell you if I see it again
[08:37] morphis__: during development or just use of snapd?
[08:37] use of snapd
[08:37] but it was a mixture of me doing multiple things, don't really remember
[09:13] morphis: do you have it in snap changes?
[09:13] morphis: any theory would be very valuable today
[09:13] zyga, hey, anything left for https://github.com/snapcore/snapd/pull/3353 ?
[09:13] PR snapd#3353: Add support for reboot parameter
[09:13] zyga: no, I've cleaned my system after that some time ago
[09:13] zyga: sorry
[09:14] abeato: no, let me approve it, sorry
[09:15] mvo: (let's chat here)
[09:15] mvo: interesting observation, the zesty kernel is now at 4.10.0-21 but we see errors with just -19
[09:15] mvo: but snapd is 2.25
[09:15] mvo: this feels like a fresh install but no reboot
[09:19] zyga, thanks... there is an issue with the tests but it seems to be the CI infra
[09:19] abeato: done
[09:20] great!
[09:24] zyga: yeah
[09:25] mvo: question
[09:26] mvo: is there any way to update with apt/dpkg
[09:26] gah
[09:26] why do we turn a 503 into a 400 :-(
[09:26] mvo: that would cause conffiles to be _not_ updated even though the user has not modified his local copy?
[09:26] (you don't need to answer that)
[09:26] zyga: I can't think of anything
[09:27] thanks
[09:27] PR snapcraft#1338 opened: go plugin: Add support for cross-compilation
[09:51] PR snapd#3385 closed: cmd: add stub new snap-repair command and add timer
[09:52] mvo: ^ I merged the basic snap-repair, I'll start working a bit on top of it today
[09:53] pedronis: great, thank you
[10:00] quick question, team: which of these is better? http://pastebin.ubuntu.com/24735919/
[10:07] Chipaca: I like the second
[10:07] Chipaca, i'd take the last one but attach "(503 Service Unavailable)" after "network trouble"
[10:07] ogra_: was thinking something similar
[10:08] (in the two shorter ones, the big message is sent to the log)
[10:08] so maybe it's ok not to tell the user anything more than the shortest message
[10:08] as they can dig
[10:08] but dunno
[10:09] Chipaca: well I would argue that a 500 is not a network trouble
[10:09] network trouble would make me think that my local router needs a kick
[10:09] pedronis: where does it stop being "network trouble"?
[10:10] Chipaca: anything that gave you a http response is probably not network trouble
[10:10] pedronis: at the user's router? their isp's? our isp? our dc? ...?
[10:10] it might be cloud trouble though :)
[10:10] pedronis: i'm always open to suggestions :-D
[10:10] Chipaca: "network trouble" is a completely unactionable message
[10:11] pedronis: so are the other two
[10:11] well, sorry, it's probably not just unactionable, it's ambiguously so
[10:11] Chipaca: if you get a 50x you should probably say server somewhere, not network
[10:12] i can change it to 'server trouble' :-)
[10:12] 4xx are also server trouble, at this level
[10:12] well usually they are
[10:12] or we did something bad with snapd
[10:12] but usually
[10:12] not
[10:12] anyway not something the user can fix
[10:12] but he can report it
[10:12] 'server trouble' works
[10:13] 'server trouble (see journal for more info)'?
[10:13] Chipaca: anyway server trouble (something) seems still better to me
[10:13] something like that ... if you dont want to show the error code
[10:13] it's not relevant to everyone, but for people that can read it, it avoids an extra hop to the logs for nothing
[10:14] Chipaca: related to this we should also remember to fix in client or cmd when we call snapd issues, server issues
[10:14] zyga: https://build.opensuse.org/request/show/500355
[10:15] Chipaca: you could also say "store" troubles in most cases, I think there's also one thing we do that is not quite store but something else (userinfo)
[10:15] pedronis: piano piano se va lontano (slowly but surely)
[10:15] PR snapcraft#1339 opened: python plugin: install six before using setuptools
[10:15] true! i like store troubles
[10:15] morphis: looking
[10:15] this is in store.go after all :-)
[10:15] * Chipaca looks at auth.go just in case
[10:16] Chipaca: most stuff we do in auth.go is also to talk to the store in the end, as I said userinfo.go is the only ambiguous one
[10:16] is ssh keys in LP "store" ?
[10:17] morphis: what is the ghost feature?
[10:17] Chipaca: anyway we might also be about to conflict on this, I have also a big change for store.go (extracting retry to httputil)
[10:18] pedronis: this is a really small and silly branch
[10:18] so i don't mind either way
[10:18] Chipaca: ok, need to fix a conflict and one test issue about testing DNS when we have a proxy around, then my branch is reviewable, will give a pointer soon
[10:19] ok
[10:19] pedronis: i too have a pr up fwiw
[10:19] saw it
[10:19] will look in a bit
[10:19] ok
[10:19] mentioning it because it touches store
[10:20] although changes in store itself are minimal
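The message mapping being discussed would look roughly like this in Go (an illustrative sketch only, not the actual store.go code; the function name and message strings are assumptions):

    package store

    import (
    	"fmt"
    	"net/http"
    )

    // userErrMsg turns an HTTP response from the store into a short
    // user-facing message, per the discussion above: anything that produced
    // an HTTP response is not "network trouble", and both 4xx and 5xx are,
    // at this level, "store trouble" the user can report but not fix.
    func userErrMsg(resp *http.Response) string {
    	if resp.StatusCode >= 400 {
    		return fmt.Sprintf("store trouble (%s); see the journal for details", resp.Status)
    	}
    	return ""
    }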
[10:22] zyga: hey, it seems you told aquarius_ that my recommendations wouldn't make them match the desktop theme
[10:22] zyga: they do
[10:22] didrocks: not in general, not always
[10:22] I'm happy to explain to you how the desktop is working, but don't mislead the users based on our launcher work :)
[10:22] zyga: x11, any desktop, if you have the theme in the snap, it will
[10:22] not with the dark gtksettings though
[10:22] which is the issue aquarius_ had
[10:23] didrocks: maybe I misunderstand but can it universally mirror the theme of the classic distro?
[10:23] (because they aren't in gsettings, but in another file)
[10:23] zyga: if your snap embedded them, yes
[10:23] with a matching version of the gtk one you are linked against
[10:23] which is what the desktop-launcher does (with popular themes)
[10:23] willcooke: FYI ^
[10:23] didrocks: right, but snaps cannot embed any and all themes
[10:23] zyga: indeed
[10:24] didrocks: sure, I agree, I just replied about the fact that snaps cannot do it perfectly universally, not that it is not good for practical use
[10:24] zyga: but that wasn't the issue here, aquarius_ knew about that limitation
[10:24] yeah
[10:24] we will need to have theme snaps
[10:24] yes, I agree
[10:24] and I have some ideas on how to do them already
[10:24] but insufficient time to experiment and prototype
[10:25] ensure you talk with the desktop teams who know about the exact techniques
[10:25] because there isn't just one gsettings involved in the theme selection
[10:25] didrocks: thank you, I will
[10:25] didrocks: the idea is more general than any particular technology though,
[10:25] meanwhile, I'll fix that gtksettings not being shared
[10:26] zyga: you still need to work with existing toolkits :)
[10:26] didrocks: I'm sure there are details to explore but the general idea is to offer theme snaps that would work with any snap using a given base
[10:26] so, you need to know what the toolkits are reading and basing their theming on
[10:27] Chipaca: snapd#3417
[10:27] PR snapd#3417: httputil,store: extract retry code to httputil, reorg usages
[10:29] zyga: hum, where are the interfaces stored now? I apt-get source snapd and grep for "gsettings", but don't find any match
[10:30] mvo: https://github.com/snapcore/snapd/compare/master...zyga:feature/detect-partial-updates?expand=1
[10:30] mvo: can you please eyeball the paths to ensure that's the right thing (before we waste a CI slot)
[10:30] didrocks, https://github.com/snapcore/snapd/blob/master/interfaces/builtin/gsettings.go ?
[10:30] didrocks: you mean snapd interfaces?
[10:30] interesting, I just apt-get source snapd…
[10:30] didrocks: in snapd, in interfaces/builtin
[10:31] ah, I probably don't have source for -updates
[10:31] so very old snapd apt-get sourced :p
[10:31] :p
[10:31] pedronis: is dropping context because YAGNI?
[10:31] zyga, theme snaps ... just need to be able to content-share mount more than 1 content basically
[10:32] zyga, and auto mount the available themes
[10:32] seb128: we also need a way to tie this to base snaps
[10:32] seb128: so those themes will apply to that base and not other bases
[10:32] why?
[10:32] seb128: as for sharing many things, yes, it's doable, just I would not do it via content, instead I'd make a new interface using the same internal logic, that handles N themes automatically
[10:33] seb128: because other bases will use different toolchains, libcs and so on (fedora/suse/debian)
[10:33] seb128: so each theme needs to associate itself with a base
[10:33] seb128: as an indicator of ABI
[10:34] seb128: this will also let us have themes that need different ABIs for series-18
[10:34] seb128: this is very similar to how flatpak does it now, they tie this to runtimes which is exactly how our bases are
[10:34] zyga, themes are mostly css files and icons
[10:34] seb128: in general, they can be anything
[10:34] that's not specific to toolchains
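As a sketch of the content-sharing idea (snap names, content tags and paths below are invented for illustration, and the discussion above actually leans towards a dedicated interface rather than raw content shares), a theme snap and a consuming snap might be wired like this:

    # hypothetical theme snap, slot side of the content interface
    name: example-gtk-theme
    slots:
      theme:
        interface: content
        content: gtk-theme
        read:
          - $SNAP/share/themes

    # hypothetical application snap, plug side
    name: example-app
    plugs:
      theme:
        interface: content
        content: gtk-theme
        target: $SNAP/themes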
[10:34] Chipaca, mvo: https://github.com/snapcore/snapd/pull/3421
[10:34] PR snapd#3421: errtracker: include hints of partial dpkg update in error reports
[10:35] PR snapd#3421 opened: errtracker: include hints of partial dpkg update in error reports
[10:35] JamieBennett: ^^ this will help us test the theory on failures, this will also be cherry-picked into the next stable release
[10:36] JamieBennett, mvo: I will skip standup today, I need to go somewhere and I used up 90% of my mobile traffic by having the last few hangouts on it by accident :/
[10:37] mvo: if you are going to land that branch or any fixes, squash them for easier cherry-picking please
[10:40] zyga: http://people.canonical.com/~jj/linux+jj/ the kernel there should have a fix for the traceback you were seeing the other day
[10:41] Chipaca: well, we were passing the TODO mostly to satisfy doRequest which uses it but only for download
[10:42] Chipaca: so yes until we have something better to pass than TODO, I think this is saner
[10:43] when we have we can decide what to do with it
[10:43] Chipaca: having lunch, will look at your branch afterward
[10:49] jjohansen: is that fix also in stable kernels in ubuntu?
[10:50] jjohansen: hey :-)
=== elfgoh_ is now known as elfgoh
[11:00] PR snapcraft#1340 opened: state: save all the build packages as global
[11:17] mvo: hey, when you are ready for it to be reviewed, can you make sure you ask for me to be a reviewer of the bpf changes? also, did I hear something about a kernel traceback or eperm when trying to load the bpf cache?
[11:24] mvo: I mention it cause of no new privs. you have to make sure you get that right
[11:24] I see that yesterday you added a commit surrounding that, so maybe you figured it out already :)
[11:27] PR snapcraft#1339 closed: python plugin: install six before using setuptools
[11:33] PR snapcraft#1341 opened: Release changelog for 2.30.1
[11:39] PR snapcraft#1342 opened: nodejs: run install and commands in source-subdir
[11:42] PR snapcraft#1343 opened: go plugin: Cross compile with CGo
[11:48] pstolowski: we are not super consistent already about using http.Status* constants, we maybe should be but then I don't know whether to fix everything in this branch or do a follow-up
[11:49] Chipaca: pstolowski: this kind of code is interesting: httpStatusCode/100 == 4
[11:51] pedronis: it makes me uncomfortable but that's not an objective objection
[11:51] so it went in
[11:52] Chipaca: yes, but it cannot stand if we want to use http.Status* consistently
[11:52] (I personally don't care too much either way)
[11:53] pedronis: sure it can: return httpStatusCode/http.StatusCodeContinue == 4
[11:53] no
[11:53] :-D
[11:53] the 4 cannot be there
[11:53] ah, rats
[11:53] pedronis: actually they aren't typed
[11:53] afaik
[11:53] so
[11:53] ?
[11:54] maybe i misunderstand what you mean then
[11:54] anyway, i also don't mind one way or the other tbh (consistency wins)
[11:54] I mean that code wants you to know about 40x vs BadRequest
[11:54] Chipaca: we are not consistent atm either way
[11:54] PR snapcraft#1253 closed: go plugin: cross-compilation support
[11:54] i know
[11:54] I've been asked to be a bit more, but it's a rat's nest
[11:54] so yes something needs to be done :-)
[11:55] well, s/needs to/should/
[11:55] to be completely honest I think it is easier to be consistent about not using http.Status*
[11:55] but oh well
=== chihchun is now known as chihchun_afk
[12:00] jdstrand: yeah, I need to polish that code but I think the building blocks are there now, the testsuite passes except for the bit where the input name changed
[12:02] PR snapd#3419 closed: interfaces: partly revert aace15ab53 to unbreak core reverts
[12:03] Chipaca: pstolowski: so can do but probably it's a separate PR, I spotted about ~20 places that would need to change and they are not all already touched by the PR
[12:03] (if we don't consider tests, I'm not sure it's worth the effort there)
[12:10] Chipaca, lol @ return httpStatusCode/http.StatusCodeContinue == 4
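For the record, the two styles being compared (illustrative Go, not the PR's code; note the real constant is http.StatusContinue, not the StatusCodeContinue of the joke):

    package httputil

    import "net/http"

    // The integer-division idiom from the branch: terse, but the reader
    // must know that 4 means the 4xx class.
    func is4xx(statusCode int) bool {
    	return statusCode/100 == 4
    }

    // A version built only from http.Status* constants; the constants name
    // individual codes, not classes, so a class check still smuggles the
    // boundaries in via two named codes (400 and 500).
    func is4xxConsts(statusCode int) bool {
    	return statusCode >= http.StatusBadRequest &&
    		statusCode < http.StatusInternalServerError
    }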
[12:11] pstolowski, pedronis, mvo: I need a 2nd review on https://github.com/snapcore/snapd/pull/3421
[12:11] PR snapd#3421: errtracker: include hints of partial dpkg update in error reports
[12:11] pedronis, ok, that's fine, I didn't know there are more places affected
[12:11] zyga: yeah, looking at it now
[12:13] mvo: thank you
[12:13] I have one more on top that detects re-exec
[12:13] once those two land I'll propose cherry-picks into release/2.26
[12:15] zyga: re-exec we can infer from the build-ids, but making it explicit is nicer
[12:16] mvo: cool, thanks
[12:17] mvo: I said as much in the commit message :D
[12:18] pstolowski: Chipaca: I put back the context to retryRequestDecodeJSON
[12:22] pedronis, thank you
[12:23] zyga: haha, ok
[12:26] mvo: replied
[12:32] mvo: I'll gladly iterate if you can look at the response
[12:34] jdstrand: can we do something about the snap-confine apparmor file? could we get it out of /etc/apparmor.d and not make it a conffile?
[12:34] zyga: yeah, just looking and experimenting a bit
[12:34] zyga: curious about the idea that it might be a half-installed snapd
[12:34] mvo: aha, thank you
[12:34] zyga: and I'm trying to reproduce this
[12:35] zyga: I mean, break it intentionally and see what happens
[12:35] mvo: half-updated,
[12:35] zyga: yeah, I wonder if we should have a daemon in this case
[12:35] mvo: (pardon my ignorance of dpkg terms)
[12:35] zyga: no problem
[12:35] mvo: as the problem would be different if the profile was not loaded (at all) yet
[12:36] zyga: I mean, the daemon is started in postinst, so if the upgrade did not quite work it depends on where it breaks etc, this is what I want to check, it's an interesting angle to the problem
[12:36] zyga: aha, indeed
[12:36] mvo: would the socket be disabled during update?
[12:36] Chipaca: I reviewed your branch, but I have a question/wondering there
[12:36] mvo: because if not then snap install will just wake everything up
[12:36] zyga: plus I wonder if we can simply move the file to a different place, the more I think about it, the more I get the feeling no user should mess around with it, i.e. it should not be a conffile
[12:37] zyga: yeah, interesting
[12:37] mvo: yes, (except for special cases which could become core config later)
[12:37] mvo: but even moving it around is dangerous
[12:37] mvo: as the presence of any backups, .dpkg-{old,new} files will make apparmor load that on boot
[12:38] mvo: so depending on race timing we may load in the wrong order and overwrite
[12:38] zyga: indeed, if we move it we need to pull out the big guns to get rid of it
[12:38] mvo: (hence my initial suggestion to use a classic manager to ensure it is sane)
[12:38] mvo: yes
[12:38] zyga: it seems like the apparmor init.d loads etc first and then the snaps dir
[12:38] mvo: oh, that would be nice, we could just stick it in our directory
[12:39] mvo: anyway, I would appreciate swiftness as release time runs out
[12:40] mvo: we can iterate on making this nicer next
[12:47] mvo: yes, it could not be a conffile. the trick would be to make sure it is loaded before snap-confine is run
[12:49] jdstrand: I think mvo's observation is sufficient, it will be loaded by the same mechanism as today
[12:49] jdstrand: and even presence of any old/stale files will not
[12:50] ... not affect this as it will be over-written shortly thereafter (in the kernel)
[12:50] mvo: note that there are complications associated with moving it -- if the old file is still around, it could be loaded, possibly after the one that is somewhere else, so it needs to be always removed. also, nfs /home and some other users are modifying the profile to work around issues with the profile. we'd need to introduce a mechanism for people to apply workarounds. Thankfully, we can use the #include local/... ideas for that
[12:50] jdstrand: yeah, I noticed the local/ idea
[12:51] jdstrand: plus it looks like I need to manually add the postinst/postrm snippets that dh-apparmor would generate for me?
[12:51] zyga: I'm not 100% sure snapd will be happy with something not named snap.* in that dir
[12:51] mvo: yes
[12:52] jdstrand: aha, interesting,
[12:52] jdstrand: well, we can look for a way
[12:53] mvo: there is a question of upgrades for people who have modified the profile in /etc and what to do with their changes. do we throw them away and just comment in the bug? do we try to migrate? etc
[12:53] jdstrand: I like the local idea, where would that other file be?
[12:53] mvo: yes, I'm afraid so
[12:54] jdstrand: yes, that is thorny
[12:54] jdstrand: my gut says the benefit outweighs the costs and I would just migrate it and do a debconf prompt instructing them to migrate to the local mechanism if we detect a modified conffile on migration
[12:55] zyga: what other file? the main profile is wherever you decide. then it uses '#include local/snap-confine' (or whatever). local/ is relative to /etc/apparmor.d, so /etc/apparmor.d/local. The include file could be an absolute path. eg, #include "/path/to/thing/to/include"
[12:56] jdstrand: I must have misunderstood something, I thought we'd axe the /etc/apparmor.d file and place it in /var/lib/snapd/apparmor/profiles (or similar) and in that, immutable, new file we'd offer an include to somewhere writable
[12:56] jdstrand: and since the new profile would always update we'd have to agree on a fixed location where local configuration can still be made
[12:57] zyga: that is this idea. /etc/apparmor.d/local/snap-confine != /etc/apparmor.d/usr.lib.snapd.snap-confine
[12:57] jdstrand: ah, I see
[12:57] jdstrand: makes sense
[12:57] zyga: if you don't like /etc/apparmor.d/local/snap-confine, then use an absolute path include
[12:57] no, I was just confused about the detail, it's fine
[12:57] but /etc/apparmor.d/local is an established mechanism, so it is perhaps the right location
[12:58] * zyga sighs at 2fa that he left at ome
[12:58] and about forgetting "h" due to spanish influence
[12:59] the nicest thing to do would be to detect additions to /etc/apparmor.d/usr.lib.snapd.snap-confine.real and plop them in /etc/apparmor.d/local/snap-confine
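Sketched out, the relocation plus local-include idea looks something like this (paths follow the discussion; the profile body is elided and its exact contents are an assumption):

    # /var/lib/snapd/apparmor/profiles/snap-confine -- shipped by the
    # package, no longer a conffile, reloaded on every upgrade
    /usr/lib/snapd/snap-confine {
        # ... packaged rules ...

        # site-specific workarounds (e.g. NFS /home) live outside the
        # package; local/ resolves relative to /etc/apparmor.d
        #include <local/snap-confine>
    }

    # /etc/apparmor.d/local/snap-confine starts out empty; admins put their
    # overrides there and package upgrades never touch it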
[12:59] mvo: I cannot join the stand-up
[12:59] hehe regarding 'h'
[12:59] I don't have my token with me :/
[12:59] zyga: no worries
[13:00] mvo: maybe I could on my phone, one sec
[13:01] nope
[13:01] same 2fa
[13:01] well
[13:01] sorry
[13:02] mvo: my update for today: investigating the lock issue, testing and experimenting with those snaps in different environments
[13:02] zyga: thanks!
[13:02] mvo: confirmed with juju team that the snap works ok normally, doesn't run inside docker (runs alongside)
[13:02] mvo: drafted two branches, the one for dpkg-new detection and a small one on top for did-re-exec detection
[13:03] mvo: I need to test a suse PR that fixes one last thing we wanted to do to enter factory (postrm/purge script)
[13:03] mvo: and that's pretty much it
[13:03] mvo: I looked at your branch from last night but I didn't push it forward yet
[13:04] zyga: I did some small tweaks to my seccomp-bpf branch but still plenty of room for improvements
[13:04] zyga: thanks for the update
[13:05] mvo: yes, I want to iterate on it but it has lower priority than this work for now
[13:21] mvo: please let me know how to proceed on the extra error report data
[13:22] mvo: I'm happy to do everything just let's agree on what
[13:24] * zyga takes a brief break
[13:29] zyga: md5sum I think we want
[13:29] zyga: i.e. if we have a dpkg-new file we want a md5sum
[13:30] zyga: otherwise I agree with you, nothing is really important and we can iterate later
[13:31] mvo: OK
[13:31] mvo: I'm on this
[13:31] mvo: thank you!
[13:32] zyga: thank you
[13:33] zyga: and then we need to explore what to do to stop making snap-confine.real a conffile
[13:33] zyga: or we just rev it every time we change it ;) snap-confine.1
[13:33] .2
[13:33] etc
[13:37] mvo: haha, and endure the eternal conffile migrations in the maintscripts :)
[13:38] mvo: but I think this is the path forward to ensure sanity
[13:38] (not the maintscript, just moving away from /etc)
[13:38] plus
[13:38] mvo: one less file in /etc :D
[13:38] exactly
[13:52] niemeyer: this is what we concluded I think about status codes: https://forum.snapcraft.io/t/numeric-http-status-codes-vs-http-status-constants/860
[13:52] hi all: classic-support interface is for ubuntu-core or classic?
[13:54] NicolinoCuralli: it's for the "classic" snap only, used to have a classic environment on Ubuntu Core
[13:54] NicolinoCuralli: 'classic' is unfortunately super overloaded. that is only for the 'classic' snap
[13:55] oki
[13:55] thanks a lot
[13:55] it's not used on classic systems or by classic confinement snaps
[13:55] ok: i understand it
[13:55] and yes classic is quite the overloaded term these days
[13:55] :)
[13:57] we should usually add a 2nd word there I suppose, as I did here, unless the context is super clear
[13:57] all those terms try to communicate something in the same mental area ('old world' usage patterns) but it gets a bit difficult to talk about the different technologies for all the different angles
[13:57] pedronis: indeed
[14:01] jdstrand: niemeyer: maybe we should have a "The meanings of classic" forum post
[14:01] lol
[14:01] pedronis: That's a good idea actually
[14:01] that will end like a philosophical pamphlet :)
[14:02] * ogra_ thinks we should rename one of the "classics" ...
[14:02] pedronis: Or perhaps "The meaning of classic"
[14:02] pedronis: It means just one thing really.. we just use it in several different places
[14:12] PR snapd#3422 opened: tests: take into account staging snap-ids for snap-info
[14:32] mvo: updated
[14:34] mvo: I simplified it a bit
[14:34] mvo: and the extra hashes will help in de-duplicating issues
[14:34] mvo: we'll now measure hashes of both the current and the .dpkg-new (if any) files
[14:35] mvo: so we can check which profile is really used, which will let us test my theory reliably
[14:36] mvo: I'll start the 2.26 based branches and PRs now
[14:37] zyga: thanks, if we squash the merge we can just cherry pick in 2.26 trivially
[14:38] zyga: let's use md5sum, that's the same that dpkg is using, this way we can easily compare with the dpkg metadata instead of having to unpack it
[14:38] zyga: for snapConfineProfileDigest()
[14:38] zyga: let me comment in the PR
[14:40] aha, sure
[14:40] one sec
[14:41] smart idea btw :)
[14:41] zyga: thanks, one tiny bikeshed naming suggestion then it's good to go
[14:42] mvo: sure, let me look
[14:42] haha, I called it like that
[14:43] but I reverted to not reflow the block
[14:43] but no worries :)
[14:43] zyga: hahaha
[14:43] I like it better this way
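A minimal sketch of that digest helper (the real snapConfineProfileDigest() lives in the errtracker branch; this standalone version and its hard-coded paths are illustrative):

    package main

    import (
    	"crypto/md5"
    	"fmt"
    	"io"
    	"os"
    )

    // profileDigest returns the md5 of a profile file, matching what dpkg
    // records in its metadata, so a report can say which profile content is
    // actually on disk. Hashing both the live profile and a stray .dpkg-new
    // sibling reveals a partially applied package upgrade.
    func profileDigest(path string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	h := md5.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return "", err
    	}
    	return fmt.Sprintf("%x", h.Sum(nil)), nil
    }

    func main() {
    	profile := "/etc/apparmor.d/usr.lib.snapd.snap-confine.real"
    	for _, p := range []string{profile, profile + ".dpkg-new"} {
    		if digest, err := profileDigest(p); err == nil {
    			fmt.Println(p, digest)
    		}
    	}
    }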
[14:47] mvo: tweaked
[14:51] mvo: re, +1 if you like it, I'll work on other branches
[14:52] mvo: when do you want to do 2.26.x today?
[14:55] * Chipaca has returned from the dentist expedition
[14:55] pedronis: "50 shades of classic"?
[14:56] zyga_: I'm still keen to do a 2.26.x today
[14:57] excellent
[14:57] fgimenez: I think we have a new core in edge, could you please run the nested test again?
[14:57] so we do a .candidate today?
[14:57] or beta?
[14:57] hmm, can i force re-exec somehow to avoid ever ending up with the deb based setup?
[14:57] mvo: btw, you said you would cherry pick it
[14:58] (like i can force no-reexec, just the other way around)
[14:58] mvo: I can do that too (but feel free, just want to know if I should open branches)
[14:58] ogra_: not today, no; we have a preference and we check for viability
[14:58] ogra_: look at cmd/cmd.go
[14:59] zyga_, i fear the docker stuff with snap-confine will only work if it is actually re-execed ...
[14:59] zyga_: probably only beta
[14:59] which works by sheer luck because i force install the edge core now
[14:59] mvo: sure on it thx
[14:59] (so i can be sure to be newer than the deb)
[14:59] zyga_: cherry pick should be fine once it's in master
[15:00] but if i wanted to use the stable core it might jump back and forth on container startup
[15:01] (depending if the deb snapd is newer than the one in core or not)
[15:12] Chipaca: welcome back, we chatted in the standup about http status codes: https://forum.snapcraft.io/t/numeric-http-status-codes-vs-http-status-constants/860
[15:15] ha!
[15:15] * ogra_ is using the snapcraft snap inside a docker container to build a snap
[15:16] awesome :D
[15:17] pstolowski: btw when you have a bit of time could you finish the review/revote on snapd#3417 given what we discussed in the standup
[15:17] PR snapd#3417: httputil,store: extract retry code to httputil, reorg usages
[15:17] pedronis, i approved already a few minutes ago
[15:21] PR snapd#3422 closed: tests: take into account staging snap-ids for snap-info
[15:22] pstolowski: ah, thank you
[15:22] I missed the email
[15:22] saw it now
[15:23] ogra_: but was that docker container from the docker snap :p
[15:23] pstolowski: btw, related to it, I noticed we use the defaultRetryStrategy also for download, shouldn't we use a slightly longer overall timeout there?
[15:23] cwayne, hah, no, i dont think that will work
[15:24] cwayne, only the docker.io deb
[15:24] not with that attitude it won't!
[15:24] heh
[15:25] i need to break all confinement to make that work ... i doubt that can work with the docker snap wrapped around it
[15:25] pedronis, hmm, fair point, although I'd say if we time out it's usually at the beginning, when the download is initiated
[15:26] though i might .... in an insane moment ... try it :P
[15:26] pstolowski: yes, I don't think we should make it 30 mins because some snaps are big
[15:26] just saying it might need to be a bit more, some small multiple of the default one
[15:26] ogra_: that's the spirit :D
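The defaultRetryStrategy in question sits on top of gopkg.in/retry.v1 (which PR 3417 moves into httputil); a sketch of the pattern, with made-up numbers for the longer download budget suggested above:

    package main

    import (
    	"fmt"
    	"time"

    	"gopkg.in/retry.v1"
    )

    // A short overall budget for API requests...
    var defaultRetryStrategy = retry.LimitCount(5, retry.LimitTime(12*time.Second,
    	retry.Exponential{
    		Initial: 100 * time.Millisecond,
    		Factor:  2.5,
    	},
    ))

    // ...and some small multiple of it for downloads, where big snaps need
    // more headroom (values invented for illustration).
    var downloadRetryStrategy = retry.LimitCount(5, retry.LimitTime(60*time.Second,
    	retry.Exponential{
    		Initial: 100 * time.Millisecond,
    		Factor:  2.5,
    	},
    ))

    func main() {
    	for attempt := retry.Start(defaultRetryStrategy, nil); attempt.Next(); {
    		fmt.Println("trying...")
    		// issue the request here; break on success or a
    		// non-retriable error
    		break
    	}
    }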
[15:38] mvo: core-revert succeeded with latest edge core snap \o/
[15:39] fgimenez: excellent!
[15:40] mvo: http://paste.ubuntu.com/24738597/
[15:41] PR snapd#3421 closed: errtracker: include bits of snap-confine apparmor profile
[15:42] zyga_: looks like we need a backport of 3421 - it can not be cleanly cherry-picked, if you could do that, that would be great
[15:42] mvo: absolutely, on it
[15:42] PR snapcraft#1344 opened: git: don't use --remote for updating submodules
[15:45] mvo: one for your consideration https://github.com/snapcore/snapd/pull/3423
[15:45] PR snapd#3423: errtracker: report if snapd did re-execute itself
[15:45] PR snapd#3423 opened: errtracker: report if snapd did re-execute itself
[15:49] mvo: the backport https://github.com/snapcore/snapd/pull/3424
[15:49] PR snapd#3424: errtracker: include bits of snap-confine apparmor profile (#3421)
[15:49] zyga_: \o/
[15:49] PR snapd#3424 opened: errtracker: include bits of snap-confine apparmor profile (#3421)
[15:50] PR snapd#3424 closed: errtracker: include bits of snap-confine apparmor profile (#3421)
[15:50] mvo, could you please retrigger the failed tests in the econnreset MP?
[15:50] cachio: I can
[15:50] zyga_, tx
[15:52] done
[15:53] zyga_, tx
[15:53] zyga_, could you trigger this one too?
[15:53] https://github.com/snapcore/snapd/pull/3405
[15:53] PR snapd#3405: tests: fix for upgrade test when it is repeated
[15:55] done
[15:57] zyga_, tx
[15:57] :)
[15:58] we need a 2nd review on https://github.com/snapcore/snapd/pull/3423
[15:58] PR snapd#3423: errtracker: report if snapd did re-execute itself
[16:01] Chipaca: thanks, I was puzzled by the not-found change
[16:01] zyga_: btw. did you talk with anyone at suse about the global /snap dir?
=== om26er is now known as om26er_
=== om26er_ is now known as om26er
[16:02] pedronis: blinkers on
[16:02] kyrofa, elopio, https://launchpad.net/ubuntu/zesty/+queue?queue_state=1&queue_text=snapcraft
[16:07] sergiusens_: did you see the bit of my review about dropping the extra Conflicts: python3-click-cli?
[16:16] cjwatson: I saw the one from around 10hs ago and pushed a revert of that, haven't seen newer comments, let me check now
[16:17] cjwatson: yeah, I think I addressed your comment already, thanks for finding that slip
=== sergiusens_ is now known as sergiusens
[16:18] sergiusens: you did the PkTransactionPast one but not the Conflicts
[16:18] PR snapcraft#1345 opened: deltas: Improved message returned when delta is too big
[16:19] I made two relevant comments :)
=== pbek_ is now known as pbek
[16:23] oh, I didn't seem to notice that one
[16:23] * sergiusens goes and reads again
[16:23] pedronis: replied to your comment
[16:25] zyga_: ah, I would return yes and no, and rename the field; I find the "" one a bit confusing
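PR 3423's "did we re-exec" flag boils down to asking where the running binary lives; a self-contained sketch (not the actual errtracker code, which may use a different signal such as an environment variable set at re-exec time):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // didReExec reports whether the running snapd re-executed itself into
    // the copy shipped in the core snap: after re-exec /proc/self/exe
    // points under /snap/core/... rather than /usr/lib/snapd/.
    func didReExec() bool {
    	exe, err := os.Readlink("/proc/self/exe")
    	if err != nil {
    		return false
    	}
    	return strings.HasPrefix(exe, "/snap/core/")
    }

    func main() {
    	fmt.Println("re-executed:", didReExec())
    }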
[16:26] zyga_, you wrote the "Building the code locally" part in snapd's HACKING.md file... question: is there a way to compile it and run it *from the project dir*, not installing it in the system?
[16:26] zyga_, all I want to do is to hit the staging environment with snapd for some tests
[16:27] (is there an easier way than re-building?)
=== jcamino100080 is now known as jcamino
[16:38] sergiusens: thanks, approved now. it should be landable with bileto if you still remember how to use that :-)
[16:38] Facu: o/
[16:38] Facu: GOPATH=~/canonical/snappy go build -v -i -o /tmp/srv github.com/snapcore/snapd/cmd/snapd && sudo ~/bin/run-snapd-srv
[16:38] Facu: if all you're wanting to do is hit the store, that ^ does it
[16:38] cjwatson: I was lucky to escape it early, but I was going to ask slangasek if it was still functional and still the preferred mechanism
[16:38] thanks for looking
[16:39] I'll work on getting it in
[16:39] Facu: unless what you're tweaking is the client, in which case you can just go run it
[16:39] yeah, it's how the last version was landed at any rate
[16:39] I don't know if quadruple landings are a thing though, possibly not, you might need multiple MPs
[16:39] bileto is in maintenance mode but still functional
[16:40] quadruple landings should not be a thing, the multiple-series landings have always caused a bit of friction against the archive and I think we've done away with them post-zesty
[16:40] righto
[16:44] zyga_, do you know if it is possible to see all the ci results for a specific branch?
[16:44] in travis
[16:49] cachio: yes, always if it was tested
[16:49] cachio: which branch are you thinking about?
[16:49] pedronis: sure, I will make the changes, thank you for clarifying that
[16:49] zyga_, sergiocazzolato:tests-fix-upgrade-basic
[16:50] Facu: yes, although there are many loopholes and complexity
[16:50] Facu: which branch do you want to try?
[16:50] Facu: depending on where the changes are you need to do more or less things
[16:50] zyga_, just master, but wait, Chipaca is currently helping me (and teaching me some go stuff)
[16:51] mbuahaha
[16:51] Facu: you can just switch to edge :)
[16:51] Facu: use the edge channel luke ;-)
[16:51] Facu: sure I saw, I just reply in the backlog order
[16:52] zyga_, snapd from edge (snap install snapd --channel=edge) uses staging?
[16:52] Facu: ah, staging store?
[16:52] Facu: no
[16:52] nope
[16:52] ok
[16:56] slangasek: cjwatson: there seems to be no series registered for click to land in older releases, should I get this in artful with bileto and go the traditional debsource route for the SRU?
[17:03] sergiusens: that's fine by me
[17:07] zyga_, for the record, all I needed to do to run snapd against staging (plus some debugging I wanted) is
[17:07] sudo systemctl stop snapd.service snapd.socket
[17:07] sudo SNAP_TESTING=1 SNAPD_DEBUG=1 SNAPD_DEBUG_HTTP=7 SNAPPY_USE_STAGING_STORE=1 /usr/lib/snapd/snapd
[17:13] :-)
[17:13] nice
[17:14] zyga_, ...which doesn't really work, because you cannot cross assertions from prod and staging, I'm currently learning
[17:15] zyga_, so either you start a VM that never used prod before, or you clean up somehow what snapd has stored
[17:15] Facu: ah, indeed
[17:15] Facu: test builds use a different key
[17:16] zyga_, I'm not using a test build, but the system's snapd
[17:17] right
[17:20] pedronis: updated
[17:21] pedronis: I'll land it and cherry pick for 2.26 when it goes green
[17:21] * zyga_ works on snap-confine patches now
[17:31] mvo: backport https://github.com/snapcore/snapd/pull/3425
[17:31] PR snapd#3425: errtracker: report if snapd did re-execute itself
[17:32] PR snapd#3425 opened: errtracker: report if snapd did re-execute itself
[17:46] PR snapd#3425 closed: errtracker: report if snapd did re-execute itself
[17:49] zyga_: no, that fix is not in any ubuntu kernel, it was done yesterday
[17:49] jjohansen: aha, interesting!
[17:49] jjohansen: thank you!
[17:50] jjohansen: btw, not sure if you noticed but we ran into an apparmor oops, just trying to recall who saw this :/
[17:50] ogra_: was that you?
[17:52] zyga_: me
[17:54] mvo: bug? oops traceback? anything for me to look at?
[17:54] mvo: ah, thank you :)
[17:56] Folks, I got a heads up from Linode that they've blocked our use of the API due to volume
[17:56] Expect broken tests until I sort that out
[17:58] niemeyer: thanks for the heads up!
[17:58] jjohansen: let me search, I only have the bits that I gave to zyga_ the other day, it was actually me using seccomp inappropriately so it may well not be a real apparmor bug
[17:58] niemeyer: volume of calls we make?
[17:58] ah
[17:58] jjohansen: http://paste.ubuntu.com/24724666/ this is what I have, I think prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) = 0 was actually the problem, i.e. I was incorrectly using that
[17:58] jjohansen: I recall now
[17:58] zyga_: Yep, spread
[17:59] jjohansen: aa change onexec would fail and we'd oops
[17:59] jjohansen: but looks like a bug in the kernel
[18:00] mvo, zyga_: ah yep the kernel I built yesterday should fix that http://people.canonical.com/~jj/linux+jj/
[18:00] jjohansen: great, thank you - that is all I have encountered
[18:22] I guess I'll need to write a new spread backend for a cloud where 80 machines isn't much.. :(
[18:27] niemeyer, are you gonna try with aws? or is it much more expensive?
[18:27] It is more expensive and has other associated issues in terms of bookkeeping
[18:28] But everything may be worked around
[18:28] rackspace is another good alternative
[18:28] On Linode we get a machine decent enough to run our tests for a whole month for 10 bucks
[18:28] not sure about the cost
[18:29] If I have to leave a non-mainstream provider, I'm not going anywhere else but AWS or GCE
[18:29] niemeyer: heh, that's pretty depressing
[18:29] niemeyer: did we abuse the API somehow?
[18:29] Where I'm sure we'll be small bees
[18:29] zyga_: We did in their opinion
[18:29] niemeyer: by creating/destroying machines?
[18:30] zyga_: Yep
[18:30] zyga_: It's about volume
[18:30] zyga_: They're being dumb.. it was fine when we had 20 machines
[18:30] niemeyer: interesting, I wonder if that has greater impact than just running a vm 24/7
[18:30] niemeyer: one more option, standardize on qemu
[18:30] zyga_: Now that we have 80, it's 4 times as much
[18:30] From the same account
[18:30] niemeyer: get a few huge VMs
[18:30] niemeyer: and let them proxy our spread protocol
[18:30] niemeyer: so instead of hitting linode
[18:31] niemeyer: we'll hit any host with a qemu helper
[18:31] niemeyer: you already have the backend written
[18:31] niemeyer: and it's just the proxying that needs doing
[18:31] +/- details ;-)
[18:31] zyga_: Heh.. it's just code that needs doing, yes.. like anything else
[18:31] zyga_: Part of the goal here is for the system to sustain itself without maintenance
[18:32] Almost entirely, at least
[18:32] niemeyer: or do what I did, I bought a 24 thread workstation with 48 GB of ram for a few hundred on ebay (not for me)
[18:32] Nobody babysits our machines on a daily basis
[18:32] niemeyer: there is no cloud, it's just someone's computer :D
[18:32] zyga_: This needs _maintenance_
[18:32] niemeyer: yes that's true :/
[18:32] zyga_: I get news about broken hardware on those 80 machines all the time
[18:32] broken hardware?!
[18:32] It's a non-issue for us
[18:32] Yep
[18:33] as in virtually broken virtual hardware or really broken linode metal?
[18:33] zyga_: Hardware breaks surprisingly often when you get to high numbers
[18:33] wow, linode needs to dust their cloud more often
[18:33] heheh
[18:33] or use less soap and water :D
[18:33] zyga_: Again, this is *normal*
[18:34] interesting, I never ran anything of this size
[18:34] and actually this is tiny size by cloud standards
[18:34] zyga_: Think about percentages
[18:34] I can recall one email I got from my provider when they just told me that they migrated my VM from data centre to data centre for maintenance, without stopping the VM
[18:35] but since the performance was slower for that minute it took, they notified me
[18:35] (and two months in advance AFAIR)
[18:35] anyway
[18:35] sad linode day :-(
[18:35] * zyga_ hugs niemeyer for the amazing spread that lets us do stuff we couldn't dream of before
[18:35] Yeah.. I'm not super optimistic
[18:35] remember when every test was written in go and mocking?
[18:36] Yeah, I predict a weekend lost here
[18:36] and we had 50x fewer of those
[18:36] mvo: ^^ you may need to merge manually for the release
[18:37] niemeyer: what's the chance of using canonical cloud for that?
[18:38] niemeyer: could we get ~100 machines that just work?
[18:38] zyga_: Pretty low.. we want completely open firewalls
[18:38] ah, indeed
[18:38] well
[18:38] maybe google or azure :)
[18:38] I don't know their pricing though
[18:39] but yeah, more coding on the glue
[18:39] maybe write a blog post
[18:39] about linode being terrible and ugly and unfair
[18:39] and snapd needing a better cloud provider
[18:39] maybe someone will notice
[18:39] (we can help spread the news)
[18:40] I used to create up to 1000 machines for performance tests in aws
[18:40] niemeyer: out of curiosity, what is the limit (numeric) that we exceeded?
[18:40] and I never had an internet problem or any other kind of problem
[18:40] performance test/botnet
[18:40] * genii ducks
[18:41] zyga_: None.. "you seem to be creating problems so we blocked you"
[18:42] cachio: To be fair, AWS doesn't even let us create 1000 machines in general..
[18:42] cachio: We had to get a special permit to do that when we needed that once
[18:42] niemeyer: have you looked at ovh at all?
[18:42] niemeyer, well, I asked for that
[18:42] they were slightly mean but i think we worked through it
[18:43] by default there is a limit of about 100 or something like that
[18:45] the main problem is that it was expensive
[18:52] 80 instances 8 hours/day 2vcpu 4GB ram = $917 monthly
[18:54] * zyga_ dreams about those ~200 core workstations
[18:59] cachio: Yeah, so it's not quite fair to say it was painless :)
[19:05] Ban lifted..
[19:05] Still discussing details, but it should be unblocked right now
[19:13] ogra_: http://biosrhythm.com/?page_id=1453
[19:15] * zyga_ has to break now
[19:17] PR snapd#3423 closed: errtracker: report if snapd did re-execute itself
[19:18] PR snapd#3426 opened: release: 2.26.4
[19:39] niemeyer: oh my! (blocked our use cause of volume). godspeed
[19:52] jdstrand: Yeah, apparently the API is not widely used there..
[19:53] We'll sort it out..
[19:53] Found someone more accessible and happy to discuss the use case
=== elfgoh_ is now known as elfgoh
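(Sanity-checking the [18:52] estimate: 80 instances x 8 h/day x ~30 days is about 19,200 instance-hours, so $917/month works out to roughly $0.05 per instance-hour, which is in the neighbourhood of on-demand pricing for a 2-vCPU/4 GB machine at the time.)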
[20:58] hello, I'm having some issues making a snap that uses kf5-frameworks as a part, can someone help me?
[23:16] hello
[23:17] how can i enable a local terminal in my raspberry pi. i want to run an XServer in the hdmi port but by default all ubuntu core lets me do is ssh remotely, where i get permission problems trying to open the display
[23:31] Hilikus: not sure i understand what you mean by 'local' terminal? I'm guessing that by default there is no x server running, or it's confined possibly?
[23:32] nacc: I installed a chromium package that comes with an X server, but because ubuntu core only lets me connect via ssh, i have the extra problems of authenticating. if i can connect to a local tty instead of via ssh then maybe i'll have better luck
[23:32] basically, i don'
[23:33] Hilikus: 'extra problems of authenticating'? you mean you can't ssh in?
[23:33] 't want to deal with the remote XServer issues
[23:33] no, i can ssh just fine
[23:33] but i can't forward the raspberry X session to my desktop
[23:34] Hilikus: i assume you mean you did `ssh -X` ?
[23:34] yes
[23:35] but even if that worked, i need to be able to show the X session locally, via the HDMI of the raspberry + with a local keyboard and mouse, that's why i want to enable a local terminal. every other server i've used in my life has local terminals but ubuntu core only has this message (without prompt) saying to ssh
[23:37] Hilikus: i'm fairly sure by default core doesn't have any ttys (or gettys) running
[23:37] i think so too. but do you know how to enable one? is there something i could install?
[23:38] or do you have another idea how to get a local graphical session with a browser? it doesn't have to be XServer, mir or wayland would be ok. i just need a browser in a local session
[23:39] Hilikus: I don't know, sorry
[23:39] ok. thank you anyway
[23:41] Hilikus: once you ssh in, do you see any X server running? `ps aux | grep X`
[23:42] no
[23:43] Hilikus: what snap did you install (what name)?
[23:43] https://code.launchpad.net/~osomon/+snap/chromium-snap-beta
[23:45] Hilikus: it looks like there are a lot of interfaces it depends on, are they all connected?
[23:45] Hilikus: something like `snap interfaces`
[23:48] mm actually no
[23:48] Hilikus: this is a gray area for me -- i'm not sure running a full desktop is really the target, but it might be possible (I have no idea actually)
[23:48] i didn't think of that
[23:48] Hilikus: you might need to connect the various plugs if they weren't connected, so that the snap can run
[23:48] i don't want a full desktop. this is for a kind of kiosk, i do still need a graphical interface
[23:49] Hilikus: something like `snap connect <snap>:<plug> <slot>`
[23:49] Hilikus: oh ok, based upon the marketing on the core page (digital signage), that certainly seems like it should be possible, but I don't know any of the details :/
[23:50] Hilikus: actual snappy folks should be around at some point, though :) you can ask again -- or maybe post on the forum?
[23:51] ok, then for sure this won't work. your points made me realize this package wants to plug to unity7, not only X, so this package is for a hybrid system
[23:52] Hilikus: yeah, i would expect chromium to be for the full-fledged browser (meaning desktop integrated)
[23:52] Hilikus: for kiosk, you want to basically write your own snap that is just your kiosk'd application (I think)
[23:52] Hilikus: or did you mean your kiosk'd application is on a website?
[23:53] yes, on a website
[23:54] Hilikus: ah ok -- I don't immediately see any standalone browsers via `snap find`, but it might be worth looking
[23:59] ive been looking for a week now with no luck :(
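For anyone following along, the interface checks nacc suggests look roughly like this (the plug and slot names shown are illustrative; the real ones come from your own `snap interfaces` output):

    $ snap interfaces
    Slot              Plug
    :network          chromium-snap-beta
    -                 chromium-snap-beta:x11

    # connect a dangling plug to the matching system slot
    $ snap connect chromium-snap-beta:x11 :x11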