[07:19] <zyga> o/
[07:20] <pstolowski> morning zyga !
[07:34] <zyga> mvo: hey, no luck with lock issue yet; what makes me mad is that it happens in real time, every minute there are several reports
[07:35] <zyga> mvo: it might be one machine doing CI but we just don't know
[07:35] <zyga> mvo: I will try to inject some possible failures to see if I can replicate the error (yesterday all my attempts failed to reproduce it)
[08:36] <morphis__> zyga: I've seen the lock issue multiple times too on my system but lost that state already a while back, will tell you if I see it again
[08:37] <zyga> morphis__: during development or just use of snapd?
[08:37] <morphis__> use of snapd
[08:37] <morphis__> but it was a mixture of me doing multiple things, don't really remember
[09:13] <zyga> morphis: do you have it in snap changes?
[09:13] <zyga> morphis: any theory would be very valuable today
[09:13] <abeato> zyga, hey, anything left for https://github.com/snapcore/snapd/pull/3353 ?
[09:13] <mup> PR snapd#3353: Add support for reboot parameter <Created by alfonsosanchezbeato> <https://github.com/snapcore/snapd/pull/3353>
[09:13] <morphis> zyga: no, I've cleaned my system after that some time ago
[09:13] <morphis> zyga: sorry
[09:14] <zyga> abeato: no, let me approve it, sorry
[09:15] <zyga> mvo: (let's chat here)
[09:15] <zyga> mvo: interesting observation, the zesty kernel is now at 4.10.0-21 but we see errors with just -19
[09:15] <zyga> mvo: but snapd is 2.25
[09:15] <zyga> mvo: this feels like a fresh install but no reboot
[09:19] <abeato> zyga, thanks... there is an issue with the tests but seems to be the CI infra
[09:19] <zyga> abeato: done
[09:20] <abeato> great!
[09:24] <mvo> zyga: yeah
[09:25] <zyga> mvo: question
[09:26] <zyga> mvo: is there any way to update with apt/dpkg
[09:26] <Chipaca> gah
[09:26] <Chipaca> why do we turn a 503 into a 400 :-(
[09:26] <zyga> mvo: that would cause conffiles to be _not_ updated even though the user has not modified his local copy?
[09:26] <Chipaca> (you don't need to answer that)
[09:26] <mvo> zyga: I can't think of anything
[09:27] <zyga> thanks
[09:27] <mup> PR snapcraft#1338 opened: go plugin: Add support for cross-compilation <enhancement> <Created by kalikiana> <https://github.com/snapcore/snapcraft/pull/1338>
[09:51] <mup> PR snapd#3385 closed: cmd: add stub new snap-repair command and add timer <Created by mvo5> <Merged by pedronis> <https://github.com/snapcore/snapd/pull/3385>
[09:52] <pedronis> mvo: ^ I merged the basic snap-repair, I'll start working a bit on top of it today
[09:53] <mvo> pedronis: great, thank you
[10:00] <Chipaca> quick question, team: which of these is better? http://pastebin.ubuntu.com/24735919/
[10:07] <mvo> Chipaca: I like the second
[10:07] <ogra_> Chipaca, i'd take the last one but attach "(503 Service Unavailable)" after "network trouble"
[10:07] <Chipaca> ogra_: was thinking something similar
[10:08] <Chipaca> (in the two shorter ones, the big message is sent to the log)
[10:08] <Chipaca> so maybe it's ok not to tell the user anything more than the shortest message
[10:08] <Chipaca> as they can dig
[10:08] <Chipaca> but dunno
[10:09] <pedronis> Chipaca: well I would argue that a 500 is not a network trouble
[10:09] <pedronis> network trouble would make me think that my local router needs a kick
[10:09] <Chipaca> pedronis: where does it stop being "network trouble"?
[10:10] <pedronis> Chipaca: anything that gave you a http response is probably not network trouble
[10:10] <Chipaca> pedronis: at the user's router? their isp's? our isp? our dc? ...?
[10:10] <pedronis> it might be cloud trouble though :)
[10:10] <Chipaca> pedronis: i'm always open to suggestions :-D
[10:10] <pedronis> Chipaca: "network trouble" is a completely unactionable message
[10:11] <Chipaca> pedronis: so are the other two
[10:11] <pedronis> well, sorry, it's probably not just unactionable, it's ambiguously so
[10:11] <pedronis> Chipaca: if you get a 50x you should probably say server somewhere, not network
[10:12] <Chipaca> i can change it to 'server trouble' :-)
[10:12] <Chipaca> 4xx are also server trouble, at this level
[10:12] <pedronis> well usually they are
[10:12] <pedronis> or we did something bad with snapd
[10:12] <pedronis> but usually
[10:12] <pedronis> not
[10:12] <pedronis> anyway not something the user can fix
[10:12] <ogra_> but he can report it
[10:12] <Chipaca> 'server trouble' works
[10:13] <Chipaca> 'server trouble (see logs for more info)'?
[10:13] <Chipaca> see journal*
[10:13] <pedronis> Chipaca: anyway  server trouble (something) seems still better to me
[10:13] <ogra_> something like that ... if you dont want to show thee error code
[10:13] <pedronis> it's not relevant to most, but for people who can read it, it avoids an extra hop to the logs for nothing
[10:14] <pedronis> Chipaca: related  to this we should also remember to fix in client or cmd when we call snapd issues, server issues
[10:14] <morphis> zyga: https://build.opensuse.org/request/show/500355
[10:15] <pedronis> Chipaca: you could also say  "store" troubles in most cases, I think there's also one thing we do that is not quite store but something else (userinfo)
[10:15] <Chipaca> pedronis: piano piano se va lontano ("slowly, slowly, one goes far")
[10:15] <mup> PR snapcraft#1339 opened: python plugin: install six before using setuptools <Created by kyrofa> <https://github.com/snapcore/snapcraft/pull/1339>
[10:15] <Chipaca> true! i like store troubles
[10:15] <zyga> morphis: looking
[10:15] <Chipaca> this is in store.go after all :-)
[10:15]  * Chipaca looks at auth.go just in case
[10:16] <pedronis> Chipaca: most stuff we do in auth.go are also to talk to the store in the end, as I said userinfo.go is the only ambiguous one
[10:16] <pedronis> are ssh keys in LP "store"?
[10:17] <zyga> morphis: what is the ghost feature?
[10:17] <pedronis> Chipaca: anyway we might also be about to conflict on this, I have also a big change for store.go (extracting retry to httputil)
[10:18] <Chipaca> pedronis: this is a really small and silly branch
[10:18] <Chipaca> so i don't mind either way
[10:18] <pedronis> Chipaca: ok, need to fix a conflict and one test issue about testing DNS when we have a proxy around, then my branch is reviewable, will give a pointer soon
[10:19] <Chipaca> ok
[10:19] <Chipaca> pedronis: i too have a pr up fwiw
[10:19] <pedronis> saw it
[10:19] <pedronis> will look in a bit
[10:19] <Chipaca> ok
[10:19] <Chipaca> mentioning it because it touches store
[10:20] <Chipaca> although changes in store itself are minimal
[10:22] <didrocks> zyga: hey, it seems you told aquarius_ that my recommendations wouldn't make them match the desktop theme
[10:22] <didrocks> zyga: they do
[10:22] <zyga> didrocks: not in general, not always
[10:22] <didrocks> I'm happy to explain to you how the desktop is working, but don't mislead the users based on our launcher work :)
[10:22] <didrocks> zyga: x11, any desktop, if you have the theme in the snap, it will
[10:22] <didrocks> not with the dark gtksettings though
[10:22] <didrocks> which is the issue aquarius_ had
[10:23] <zyga> didrocks: maybe I misunderstand but can it universally mirror the theme of the classic distro?
[10:23] <didrocks> (because they aren't in gsettings, but in another file)
[10:23] <didrocks> zyga: if your snap embedded them, yes
[10:23] <didrocks> with matching version of the gtk one your are linked against
[10:23] <didrocks> which is what the desktop-launcher does (with popular themes)
[10:23] <didrocks> willcooke: FYI ^
[10:23] <zyga> didrocks: right, but snaps cannot embed any and all themes
[10:23] <didrocks> zyga: indeed
[10:24] <zyga> didrocks: sure, I agree, I just replied about the fact that snaps cannot do it perfectly universally, not that it is not good for practical use
[10:24] <didrocks> zyga: but that wasn't the issue here, aquarius_ knew about that limitation
[10:24] <didrocks> yeah
[10:24] <didrocks> we will need to have theme snaps
[10:24] <zyga> yes, I agree
[10:24] <zyga> and I have some ideas on how to do them already
[10:24] <zyga> but insufficient time to experiment and prototype
[10:25] <didrocks> ensure you talk with the desktop teams, who know about the exact techniques
[10:25] <didrocks> because there isn't just one gsettings involved in the theme selection
[10:25] <zyga> didrocks: thank you, I will
[10:25] <zyga> didrocks: the idea is more general than any particular technology though,
[10:25] <didrocks> meanwhile, I'll fix that gtksettings not being shared
[10:26] <didrocks> zyga: you still need to work with existing toolkits :)
[10:26] <zyga> didrocks: I'm sure there are details to explore but the general idea is to offer theme snaps that would work with any snap using a given base
[10:26] <didrocks> so, you need to know what the toolkits are reading and basing their theming on
[10:27] <pedronis> Chipaca: snapd#3417
[10:27] <mup> PR snapd#3417: httputil,store: extract retry code to httputil, reorg usages <Created by pedronis> <https://github.com/snapcore/snapd/pull/3417>
[10:29] <didrocks> zyga: hum, where are the interfaces stored now? I apt-get source snapd and grep for "gsettings", but don't find any match
[10:30] <zyga> mvo: https://github.com/snapcore/snapd/compare/master...zyga:feature/detect-partial-updates?expand=1
[10:30] <zyga> mvo: can you please eyeball the paths to ensure that's the right thing (before we waste a CI slot)
[10:30] <seb128> didrocks, https://github.com/snapcore/snapd/blob/master/interfaces/builtin/gsettings.go ?
[10:30] <zyga> didrocks: you mean snapd interfaces?
[10:30] <didrocks> interesting, I just apt-get source snapd…
[10:30] <zyga> didrocks: in snapd, in interfaces/builtin
[10:31] <didrocks> ah, I probably don't have source for -updates
[10:31] <didrocks> so very old snapd apt-get sourced ::p
[10:31] <didrocks> :p
[10:31] <Chipaca> pedronis: is dropping context because YAGNI?
[10:31] <seb128> zyga, theme snaps ... just need to be able to content-share mount more than 1 content basically
[10:32] <seb128> zyga, and auto mount the available themes
[10:32] <zyga> seb128: we also need a way to tie this to base snaps
[10:32] <zyga> seb128: so those themes will apply to that base and not other bases
[10:32] <seb128> why?
[10:32] <zyga> seb128: as for sharing many things, yes, it's doable, just I would not do it via content, instead I'd make a new interface using the same internal logic, that handles N themes automatically
[10:33] <zyga> seb128: because other bases will use different toolchains, libc's and so on (fedora/suse/debian)
[10:33] <zyga> seb128: so each theme needs to associate itself with a base
[10:33] <zyga> seb128: as an indicator of ABI
[10:34] <zyga> seb128: this will also let us have themes that need different ABIs for series-18
[10:34] <zyga> seb128: this is very similar to how Flatpak does it now, they tie this to runtimes which is exactly how our bases are
[10:34] <seb128> zyga, themes are mostly css files and icons
[10:34] <zyga> seb128: in general, they can be anything
[10:34] <seb128> that's not specific to toolchains
[10:34] <zyga> Chipaca, mvo: https://github.com/snapcore/snapd/pull/3421
[10:34] <mup> PR snapd#3421: errtracker: include hints of partial dpkg update in error reports <Created by zyga> <https://github.com/snapcore/snapd/pull/3421>
[10:35] <mup> PR snapd#3421 opened: errtracker: include hints of partial dpkg update in error reports <Created by zyga> <https://github.com/snapcore/snapd/pull/3421>
[10:35] <zyga> JamieBennett: ^^ this will help us test the theory on failures, this will also be cherry-picked into the next stable release
[10:36] <zyga> JamieBennett, mvo: I will skip standup today, I need to go somewhere and I used up 90% of my mobile data by having the last few hangouts on it by accident :/
[10:37] <zyga> mvo: if you are going to land that branch or any fixes, squash them for easier cherry-picking please
[10:40] <jjohansen> zyga: http://people.canonical.com/~jj/linux+jj/ the kernel there should have a fix for the traceback you were seeing the other day
[10:41] <pedronis> Chipaca: well, we were passing TODO mostly to satisfy doRequest which uses it but only for download
[10:42] <pedronis> Chipaca: so yes until we have something better to pass than TODO, I think this is saner
[10:43] <pedronis> when we have we can decide what to do with it
[10:43] <pedronis> Chipaca: having lunch, will look at your branch afterward
[10:49] <zyga> jjohansen: is that fix also in stable kernels in ubuntu?
[10:50] <zyga> jjohansen: hey :-)
[11:00] <mup> PR snapcraft#1340 opened: state: save all the build packages as global <Created by elopio> <https://github.com/snapcore/snapcraft/pull/1340>
[11:17] <jdstrand> mvo: hey, when you are ready for it to be reviewed, can you make sure you ask for me to be a reviewer of the bpf changes? also, did I hear something about a kernel traceback or eperm when trying to load the bpf cache?
[11:24] <jdstrand> mvo: I mention it cause of no new privs. you have to make sure you get that right
[11:24] <jdstrand> I see that yesterday you added a commit surrounding that, so maybe you figured it out already :)
[11:27] <mup> PR snapcraft#1339 closed: python plugin: install six before using setuptools <Created by kyrofa> <Merged by sergiusens> <https://github.com/snapcore/snapcraft/pull/1339>
[11:33] <mup> PR snapcraft#1341 opened: Release changelog for 2.30.1 <Created by sergiusens> <https://github.com/snapcore/snapcraft/pull/1341>
[11:39] <mup> PR snapcraft#1342 opened: nodejs: run install and commands in source-subdir <Created by Saviq> <https://github.com/snapcore/snapcraft/pull/1342>
[11:42] <mup> PR snapcraft#1343 opened: go plugin: Cross compile with CGo <Created by kalikiana> <https://github.com/snapcore/snapcraft/pull/1343>
[11:48] <pedronis> pstolowski: we are not super consistent already about using http.Status* constants, we maybe should be but then I don't know if to fix everything in this branch or do a follow up
[11:49] <pedronis> Chipaca: pstolowski: this kind of code is interesting: httpStatusCode/100 == 4
[11:51] <Chipaca> pedronis: it makes me uncomfortable but that's not an objective objection
[11:51] <Chipaca> so it went in
[11:52] <pedronis> Chipaca: yes, but it cannot stand if we want to use http.Status* consistently
[11:52] <pedronis> (I personally don't care too much either way)
[11:53] <Chipaca> pedronis: sure it can: return httpStatusCode/http.StatusCodeContinue == 4
[11:53] <pedronis> no
[11:53] <Chipaca> :-D
[11:53] <pedronis> the 4 cannot be there
[11:53] <Chipaca> ah, rats
[11:53] <Chipaca> pedronis: actually they aren't typed
[11:53] <Chipaca> afaik
[11:53] <pedronis> so
[11:53] <pedronis> ?
[11:54] <Chipaca> maybe i misunderstand what you mean then
[11:54] <Chipaca> anyway, i also don't mind one way or the other tbh (consistency wins)
[11:54] <pedronis> I mean that code wants you to know about 40x vs BadRequest
[11:54] <pedronis> Chipaca: we are not consistent atm either way
[11:54] <mup> PR snapcraft#1253 closed: go plugin: cross-compilation support <Created by kalikiana> <Closed by kalikiana> <https://github.com/snapcore/snapcraft/pull/1253>
[11:54] <Chipaca> i know
[11:54] <pedronis> I've been asked to be a bit more consistent, but it's a rat's nest
[11:54] <Chipaca> so yes something needs to be done :-)
[11:55] <Chipaca> well, s/needs to/should/
[11:55] <pedronis> to be completely honest I think it is easier to be consistent about not using http.Status*
[11:55] <pedronis> but oh well
[12:00] <mvo> jdstrand: yeah, I need to polish that code but I think the building blocks are there now, the testsuite passes except for the bit where the input name changed
[12:02] <mup> PR snapd#3419 closed: interfaces: partly revert aace15ab53 to unbreak core reverts <Created by mvo5> <Merged by mvo5> <https://github.com/snapcore/snapd/pull/3419>
[12:03] <pedronis> Chipaca: pstolowski: so we can do it, but it's probably a separate PR; I spotted ~20 places that would need to change and they are not all already touched by the PR
[12:03] <pedronis> (if we don't consider tests, I'm not sure it's worth the effort there)
[12:10] <pstolowski> Chipaca, lol @ return httpStatusCode/http.StatusCodeContinue == 4
[12:11] <zyga> pstolowski, pedronis, mvo: I need a 2nd review on https://github.com/snapcore/snapd/pull/3421
[12:11] <mup> PR snapd#3421: errtracker: include hints of partial dpkg update in error reports <Critical> <Created by zyga> <https://github.com/snapcore/snapd/pull/3421>
[12:11] <pstolowski> pedronis, ok, that's fine, I didn't know there are more places affected
[12:11] <mvo> zyga: yeah, looking at it now
[12:13] <zyga> mvo: thank you
[12:13] <zyga> I have one more on top that detects re-exec
[12:13] <zyga> once those two land I'll propose cherry-picks into release/2.26
[12:15] <mvo> zyga: re-exec we can infer from the build-ids, but making it explicit is nicer
[12:16] <jdstrand> mvo: cool, thanks
[12:17] <zyga> mvo: I said as much in the commit message :D
[12:18] <pedronis> pstolowski: Chipaca: I put back the context to retryRequestDecodeJSON
[12:22] <pstolowski> pedronis, thank you
[12:23] <mvo> zyga: haha, ok
[12:26] <zyga> mvo: replied
[12:32] <zyga> mvo: I'll gladly iterate if you can look at the response
[12:34] <mvo> jdstrand: can we do something about the snap-confine apparmor file? could we get it out of /etc/apparmor.d and not make it a conffile?
[12:34] <mvo> zyga: yeah, just looking and experimenting a bit
[12:34] <mvo> zyga: curious about the idea that it might be a half-installed snapd
[12:34] <zyga> mvo: aha, thank you
[12:34] <mvo> zyga: and I'm trying to reproduce this
[12:35] <mvo> zyga: I mean, break it intentionally and see what happens
[12:35] <zyga> mvo: half-updated,
[12:35] <mvo> zyga: yeah, I wonder if we should have a daemon in this case
[12:35] <zyga> mvo: (pardon my ignorance of dpkg terms)
[12:35] <mvo> zyga: no problem
[12:35] <zyga> mvo: as the problem would be different if the profile was not loaded (at all) yet
[12:36] <mvo> zyga: I mean, the daemon is started in postinst, so if the upgrade did not quite work it depends on where it breaks etc, this is what I want to check, it's an interesting angle to the problem
[12:36] <mvo> zyga: aha, indeed
[12:36] <zyga> mvo: would the socket be disabled during update?
[12:36] <pedronis> Chipaca: I reviewed your branch, but I have a question/wondering there
[12:36] <zyga> mvo: because if not then snap install will just wake everything up
[12:36] <mvo> zyga: plus I wonder if we can simply move the file to a different place, the more I think about it, the more I get the feeling no user should mess around with it, i.e. it should not be a conffile
[12:37] <mvo> zyga: yeah, interesting
[12:37] <zyga> mvo: yes, (except for special cases which could become core config later)
[12:37] <zyga> mvo: but even moving it around is dangerous
[12:37] <zyga> mvo: as the presence of any backups, .dpkg-{old,new} files will make apparmor load that on boot
[12:38] <zyga> mvo: so depending on race timing we may load in the wrong order and overwrite
[12:38] <mvo> zyga: indeed, if we move it we need to pull out the big guns to get rid of it
[12:38] <zyga> mvo: (hence my initial suggestion to use a classic manager to ensure it is sane)
[12:38] <zyga> mvo: yes
[12:38] <mvo> zyga: it seems like the apparmor init.d loads etc first and then the snaps dir
[12:38] <zyga> mvo: oh, that would be nice, we could just stick it in our directory
[12:39] <zyga> mvo: anyway, I would appreciate swiftness as release time runs out
[12:40] <zyga> mvo: we can iterate on making this nicer next
[12:47] <jdstrand> mvo: yes, it could be made not a conffile. the trick would be to make sure it is loaded before snap-confine is run
[12:49] <zyga> jdstrand: I think mvo's observation is sufficient, it will be loaded by the same mechanism as today
[12:49] <zyga> jdstrand: and even presence of any old/stale files will not
[12:50] <zyga> ... not affect this as it will be over-written shortly thereafter (in the kernel)
[12:50] <jdstrand> mvo: note that there are complications associated with moving it -- if the old file is still around, it could be loaded, possibly after the one that is somewhere else, so it needs to be always removed. also, nfs /home and some other users are modifying the profile to work around issues with the profile. we'd need to introduce a mechanism for people to apply workarounds. Thankfully, we can use the #include local/... ideas for that
[12:50] <mvo> jdstrand: yeah, I noticed the local/ idea
[12:51] <mvo> jdstrand: plus it looks like I need to maually add the postinst/postrm snippets that dh-appamor would generate for me?
[12:51] <jdstrand> zyga: I'm not 100% sure snapd will be happy with something not named snap.* in that dir
[12:51] <jdstrand> mvo: yes
[12:52] <zyga> jdstrand: aha, interesting,
[12:52] <zyga> jdstrand: well, we can look for a way
[12:53] <jdstrand> mvo: there is a question of upgrades for people who have modified the profile in /etc and what to do with their changes. do we throw them away and just comment in the bug? do we try to migrate? etc
[12:53] <zyga> jdstrand: I like the local idea, where would that other file be?
[12:53] <zyga> mvo: yes, I'm afraid so
[12:54] <zyga> jdstrand: yes, that is thorny
[12:54] <zyga> jdstrand: my gut says the benefit outweighs the costs and I would just migrate it and do a debconf prompt instructing them to migrate to the local mechanism if we detect a modified conffile on migration
[12:55] <jdstrand> zyga: what other file? the main profile is wherever you decide. then it uses '#include local/snap-confine' (or whatever). local/ is relative to /etc/apparmor.d, so /etc/apparmor.d/local. The include file could be an absolute path. eg, #include "/path/to/thing/to/include"
[12:56] <zyga> jdstrand: I must have misunderstood something, I thought we'd axe the /etc/apparmor.d file and place it in /var/lib/snapd/apparmor/profiles (or similar) and in that, immutable, new file we'd offer an include to somewhere writable
[12:56] <zyga> jdstrand: and since the new profile would always update we'd have to agree on a fixed location where local configuration can still be made
[12:57] <jdstrand> zyga: that is this idea. /etc/apparmor.d/local/snap-confine != /etc/apparmor.d/usr.lib.snapd.snap-confine
[12:57] <zyga> jdstrand: ah, I see
[12:57] <zyga> jdstrand: makes sense
[12:57] <jdstrand> zyga: if you don't like /etc/apparmor.d/local/snap-confine, then use an absolute path include
[12:57] <zyga> no, I was just confused about the detail, it's fine
[12:57] <jdstrand> but /etc/apparmor.d/local is an established mechanism, so it is perhaps the right location
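Put together, the local-include mechanism jdstrand describes could look something like this. This is an illustrative sketch of a profile fragment, not the real snap-confine profile; the attachment path and include name are assumptions taken from the chat:

```
# Main profile, shipped by the snapd package (no longer a conffile).
# Sketch only -- the real profile is much larger.
/usr/lib/snapd/snap-confine {
    # ... rules generated and owned by snapd ...

    # Site-specific additions (e.g. NFS /home workarounds) go in
    # /etc/apparmor.d/local/snap-confine, which the admin owns and
    # package upgrades never touch:
    #include <local/snap-confine>
}
```

This mirrors the established `/etc/apparmor.d/local/` convention other packages already use, so upgrades can replace the main profile unconditionally while preserving local tweaks.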
[12:58]  * zyga sighs at 2fa that he left at ome
[12:58] <zyga> and about forgetting "h" due to spanish influence
[12:59] <jdstrand> the nicest thing to do would be to detect additions to /etc/apparmor.d/usr.lib.snapd.snap-confine.real and plop them in /etc/apparmor.d/local/snap-confine
[12:59] <zyga> mvo: I cannot join the stand-up
[12:59] <jdstrand> hehe regarding 'h'
[12:59] <zyga> I don't have my token with me :/
[12:59] <mvo> zyga: no worries
[13:00] <zyga> mvo: maybe I could on my phone, one sec
[13:01] <zyga> nope
[13:01] <zyga> same 2fa
[13:01] <zyga> well
[13:01] <zyga> sorry
[13:02] <zyga> mvo: my update for today: investigating the lock issue, testing and experimenting with those snaps in different environments
[13:02] <mvo> zyga: thanks!
[13:02] <zyga> mvo: confirmed with juju team that the snap works ok normally, doesn't run inside docker (runs alongside)
[13:02] <zyga> mvo: drafted two branches, the one for dpkg-new detection and a small one on top for did-re-exec detection
[13:03] <zyga> mvo: I need to test a suse PR that fixes one last thing we wanted to do to enter factory (postrm/purge script)
[13:03] <zyga> mvo: and that's pretty much it
[13:03] <zyga> mvo: I looked at your branch from last night but I didn't push it forward yet
[13:04] <mvo> zyga: I did some small tweaks to my seccomp-bpf branch but still plenty of room for improvements
[13:04] <mvo> zyga: thanks for the update
[13:05] <zyga> mvo: yes, I want to iterate on it but it has lower priority than this work for now
[13:21] <zyga> mvo: please let me know how to proceed on the extra error report data
[13:22] <zyga> mvo: I'm happy to do everything just let's agree on what
[13:24]  * zyga takes a brief break
[13:29] <mvo> zyga: md5sum I think we want
[13:29] <mvo> zyga: i.e. if we have a dpkg-new file we want a md5sum
[13:30] <mvo> zyga: otherwise I agree with you, nothing is really important and we can iterate later
[13:31] <zyga> mvo: OK
[13:31] <zyga> mvo: I'm on this
[13:31] <zyga> mvo: thank you!
[13:32] <mvo> zyga: thank you
[13:33] <mvo> zyga: and then we need to explore what to do to stop making snap-confine.real a conffile
[13:33] <mvo> zyga: or we just rev it every time we change it ;) snap-confine.1
[13:33] <mvo> .2
[13:33] <mvo> etc
[13:37] <zyga> mvo: haha, and endure the eternal conffile migrations in the maintscripts :)
[13:38] <zyga> mvo: but I think this is the path forward to ensure sanity
[13:38] <zyga> (not the maintscript, just moving away from /etc)
[13:38] <zyga> plus
[13:38] <zyga> mvo: one less file in /etc :D
[13:38] <mvo> exactly
[13:52] <pedronis> niemeyer: this is what we concluded I think about status codes:  https://forum.snapcraft.io/t/numeric-http-status-codes-vs-http-status-constants/860
[13:52] <NicolinoCuralli> hi all: classic-support interface is for ubuntu-core or classic?
[13:54] <pedronis> NicolinoCuralli: it's for the "classic" snap only, used to have a classic environment on Ubuntu Core
[13:54] <jdstrand> NicolinoCuralli: 'classic' is unfortunately super overloaded. that is only for the 'classic' snap
[13:55] <NicolinoCuralli> oki
[13:55] <NicolinoCuralli> thanks a lot
[13:55] <pedronis> it's not used on classic system or by classic confinement snaps
[13:55] <NicolinoCuralli> ok: i understand it
[13:55] <pedronis> and yes classic is quite the overloaded term these days
[13:55] <jdstrand> :)
[13:57] <pedronis> we should usually add a 2nd word there I suppose as I did here, unless the context is super clear
[13:57] <jdstrand> all those terms try to communicate something in the same mental area ('old world' usage patterns) but it gets a bit difficult to talk about the different technologies for all the different angles
[13:57] <jdstrand> pedronis: indeed
[14:01] <pedronis> jdstrand: niemeyer: maybe we should have a "The meanings of classic" forum post
[14:01] <ogra_> lol
[14:01] <niemeyer> pedronis: That's a good idea actually
[14:01] <ogra_> that will end like a philosophical pamphlet :)
[14:02]  * ogra_ thinks we should rename one of the "classics" ... 
[14:02] <niemeyer> pedronis: Or perhaps "The meaning of classic"
[14:02] <niemeyer> pedronis: It means just one thing really.. we just use it in several different places
[14:12] <mup> PR snapd#3422 opened: tests: take into account staging snap-ids for snap-info <Created by fgimenez> <https://github.com/snapcore/snapd/pull/3422>
[14:32] <zyga> mvo: updated
[14:34] <zyga> mvo: I simplified it a bit
[14:34] <zyga> mvo: and the extra hashes will help in de-duplicating issues
[14:34] <zyga> mvo: we'll now measure hashes of both the current and the .dpkg-new (if any) files
[14:35] <zyga> mvo: so we can check which profile is really used, which will let us test my theory reliably
[14:36] <zyga> mvo: I'll start the 2.26 based branches and PRs now
[14:37] <mvo> zyga: thanks, if we squash the merge we can just cherry pick in 2.26 trivially
[14:38] <mvo> zyga: lets use md5sum, thats the same that dpkg is using, this way we can easily compare with the dpkg metadata instead of having to unpack it
[14:38] <mvo> zyga: for snapConfineProfileDigest()
[14:38] <mvo> zyga: let me comment in the pR
[14:40] <zyga> aha, sure
[14:40] <zyga> one sec
[14:41] <zyga> smart idea btw :)
[14:41] <mvo> zyga: thanks, one tiny bikeshed naming suggestion then it's good to go
[14:42] <zyga> mvo: sure, let me look
[14:42] <zyga> haha, I called it like that
[14:43] <zyga> but I reverted to not reflow the block
[14:43] <zyga> but no worries :)
[14:43] <mvo> zyga: hahaha
[14:43] <zyga> I like it better this way
[14:47] <zyga> mvo: tweaked
[14:51] <zyga_> mvo: re, +1 if you like it, I'll work on other branches
[14:52] <zyga_> mvo: when do you want to do 2.26.x today?
[14:55]  * Chipaca has returned from the dentist expedition
[14:55] <Chipaca> pedronis: "50 shades of classic"?
[14:56] <mvo> zyga_: I'm still keen to do a 2.26.x today
[14:57] <zyga_> excellent
[14:57] <mvo> fgimenez: I think we have a new core in edge, could you please run the ntested test again?
[14:57] <zyga_> so we do a .candidate today?
[14:57] <zyga_> or beta?
[14:57] <ogra_> hmm, can i force re-exec somehow to avoid ever ending up with the deb based setup ?
[14:57] <zyga_> mvo: btw, you said you would cherry pick it
[14:58] <ogra_> (like i can force no-reexec, just the other way around)
[14:58] <zyga_> mvo: I can do that too (but feel free, just want to know if I should open branches)
[14:58] <zyga_> ogra_: not today, no; we have a preference and we check for viability
[14:58] <zyga_> ogra_: look at cmd/cmd.go
[14:59] <ogra_> zyga_, i fear the docker stuff with snap-confine will only work if it is actually re-execed ...
[14:59] <mvo> zyga_: probably only beta
[14:59] <ogra_> which works by sheer luck because i force install the edge core now
[14:59] <fgimenez> mvo: sure on it thx
[14:59] <ogra_> (so i can be sure to be newer than the deb)
[14:59] <mvo> zyga_: cherry pick should be fine once its in master
[15:00] <ogra_> but if i wanted to use the stable core it might jump back and forth on container startup
[15:01] <ogra_> (depending if the deb snapd is newer than the one in core or not)
[15:12] <pedronis> Chipaca: welcome back, we chatted in the standup about http status codes: https://forum.snapcraft.io/t/numeric-http-status-codes-vs-http-status-constants/860
[15:15] <ogra_> ha!
[15:15]  * ogra_ is using the snapcraft snap inside a docker container to build a snap 
[15:16] <ogra_> awesome :D
[15:17] <pedronis> pstolowski: btw when you have a bit of time could you finish the review/revote on snapd#3417 given what we discussed in the standup
[15:17] <mup> PR snapd#3417: httputil,store: extract retry code to httputil, reorg usages <Created by pedronis> <https://github.com/snapcore/snapd/pull/3417>
[15:17] <pstolowski> pedronis, i approved already a few minutes ago
[15:21] <mup> PR snapd#3422 closed: tests: take into account staging snap-ids for snap-info <Created by fgimenez> <Merged by pedronis> <https://github.com/snapcore/snapd/pull/3422>
[15:22] <pedronis> pstolowski: ah, thank you
[15:22] <pedronis> I missed the email
[15:22] <pedronis> saw it now
[15:23] <cwayne> ogra_: but was that docker container from the docker snap :p
[15:23] <pedronis> pstolowski: btw, related to it, I noticed we use the defaultRetryStrategy also for download, shouldn't we use a slightly longer overall timeout there?
[15:23] <ogra_> cwayne, hah, no, i dont think that will work
[15:24] <ogra_> cwayne, only the docker.io deb
[15:24] <cwayne> not with that attitude it won't!
[15:24] <ogra_> heh
[15:25] <ogra_> i need to break all confinement to make that work ... i doubt that can work with the docker snap wrapped around it
[15:25] <pstolowski> pedronis, hmm, fair point, although I'd say if we time out it's usually at the beginning, when the download is initiated
[15:26] <ogra_> though i might .... in an insane moment ... try it :P
[15:26] <pedronis> pstolowski: yes, I don't think we should make it 30 mins just because some snaps are big
[15:26] <pedronis> just saying it might need to be a bit more, some small multiple of the default one
[15:26] <cwayne> ogra_: that's the spirit :D
[15:38] <fgimenez> mvo: core-revert succeeded with latest edge core snap \o/
[15:39] <mvo> fgimenez: excellent!
[15:40] <fgimenez> mvo: http://paste.ubuntu.com/24738597/
[15:41] <mup> PR snapd#3421 closed: errtracker: include bits of snap-confine apparmor profile <Critical> <Created by zyga> <Merged by mvo5> <https://github.com/snapcore/snapd/pull/3421>
[15:42] <mvo> zyga_: looks like we need a backport of 3421 - it cannot be cleanly cherry-picked; if you could do that, that would be great
[15:42] <zyga_> mvo: absolutely, on it
[15:42] <mup> PR snapcraft#1344 opened: git: don't use --remote for updating submodules <Created by Saviq> <https://github.com/snapcore/snapcraft/pull/1344>
[15:45] <zyga_> mvo: one for your consideration https://github.com/snapcore/snapd/pull/3423
[15:45] <mup> PR snapd#3423: errtracker: report if snapd did re-execute itself <Created by zyga> <https://github.com/snapcore/snapd/pull/3423>
[15:45] <mup> PR snapd#3423 opened: errtracker: report if snapd did re-execute itself <Created by zyga> <https://github.com/snapcore/snapd/pull/3423>
[15:49] <zyga_> mvo: the backport https://github.com/snapcore/snapd/pull/3424
[15:49] <mup> PR snapd#3424: errtracker: include bits of snap-confine apparmor profile (#3421) <Created by zyga> <https://github.com/snapcore/snapd/pull/3424>
[15:49] <mvo> zyga_: \o/
[15:49] <mup> PR snapd#3424 opened: errtracker: include bits of snap-confine apparmor profile (#3421) <Created by zyga> <https://github.com/snapcore/snapd/pull/3424>
[15:50] <mup> PR snapd#3424 closed: errtracker: include bits of snap-confine apparmor profile (#3421) <Created by zyga> <Merged by mvo5> <https://github.com/snapcore/snapd/pull/3424>
[15:50] <cachio> mvo, could you please retrigger the failed tests in the econnreset MP?
[15:50] <zyga_> cachio: I can
[15:50] <cachio> zyga_, tx
[15:52] <zyga_> done
[15:53] <cachio> zyga_, tx
[15:53] <cachio> zyga_, could you trigger this one too?
[15:53] <cachio> https://github.com/snapcore/snapd/pull/3405
[15:53] <mup> PR snapd#3405: tests: fix for upgrade test when it is repeated <Created by sergiocazzolato> <https://github.com/snapcore/snapd/pull/3405>
[15:55] <zyga_> done
[15:57] <cachio> zyga_, tx
[15:57] <zyga_> :)
[15:58] <zyga_> we need a 2nd review on  https://github.com/snapcore/snapd/pull/3423
[15:58] <mup> PR snapd#3423: errtracker: report if snapd did re-execute itself <Created by zyga> <https://github.com/snapcore/snapd/pull/3423>
[16:01] <pedronis> Chipaca: thanks, I was puzzled by the not-found change
[16:01] <morphis> zyga_: btw. did you talk with anyone at suse about the global /snap dir?
[16:02] <Chipaca> pedronis: blinkers on
[16:02] <Saviq> kyrofa, elopio, https://launchpad.net/ubuntu/zesty/+queue?queue_state=1&queue_text=snapcraft
[16:07] <cjwatson> sergiusens_: did you see the bit of my review about dropping the extra Conflicts: python3-click-cli?
[16:16] <sergiusens_> cjwatson: I saw the one from around 10hs ago and pushed a revert of that, haven't seen newer comments, let me check now
[16:17] <sergiusens_> cjwatson: yeah, I think I addressed your comment already, thanks for finding that slip
[16:18] <cjwatson> sergiusens: you did the PkTransactionPast one but not the Conflicts
[16:18] <mup> PR snapcraft#1345 opened: deltas: Improved message returned when delta is too big <Created by gsilvapt> <https://github.com/snapcore/snapcraft/pull/1345>
[16:19] <cjwatson> I made two relevant comments :)
[16:23] <sergiusens> oh, I didn't seem to notice that one
[16:23]  * sergiusens goes and reads again
[16:23] <zyga_> pedronis: replied to your comment
[16:25] <pedronis> zyga_: ah, I would return yes and no, and rename the field; I find the "" one a bit confusing
[16:26] <Facu> zyga_, you wrote the "Building the code locally" part in snapd's HACKING.md file... question: is there a way to compile it and run it *from the project dir*, not installing it in the system?
[16:26] <Facu> zyga_, all I want to do is to hit staging environment with snapd for some tests
[16:27] <Facu> (is there an easier way than re-building?)
[16:38] <cjwatson> sergiusens: thanks, approved now.  it should be landable with bileto if you still remember how to use that :-)
[16:38] <Chipaca> Facu: o/
[16:38] <Chipaca> Facu: GOPATH=~/canonical/snappy go build -v -i -o /tmp/srv github.com/snapcore/snapd/cmd/snapd && sudo ~/bin/run-snapd-srv
[16:38] <Chipaca> Facu: if all you're wanting to do is hit the store, that ^ does it
[16:38] <sergiusens> cjwatson: I was lucky to escape it early, but I was going to ask slangasek if it was still functional and the preferred mechanism still
[16:38] <sergiusens> thanks for looking
[16:39] <sergiusens> I'll work on getting it in
[16:39] <Chipaca> Facu: unless what you're tweaking is the client, in which case you can just go run it
[16:39] <cjwatson> yeah, it's how the last version was landed at any rate
[16:39] <cjwatson> I don't know if quadruple landings are a thing though, possibly not, you might need multiple MPs
[16:39] <slangasek> bileto is in maintenance mode but still functional
[16:40] <slangasek> quadruple landings should not be a thing, the multiple-series landings have always caused a bit of friction against the archive and I think we've done away with them post-zesty
[16:40] <cjwatson> righto
[16:44] <cachio> zyga_, do you know if it is possible to see all the CI results for a specific branch?
[16:44] <cachio> in travis
[16:49] <zyga_> cachio: yes, as long as it was tested
[16:49] <zyga_> cachio: which branch are you thinking about?
[16:49] <zyga_> pedronis: sure, I will make the changes, thank you for clarifying that
[16:49] <cachio> zyga_, sergiocazzolato:tests-fix-upgrade-basic
[16:49] <zyga_> Facu: yes, although there are many loopholes and complexity
[16:50] <zyga_> Facu: which branch do you want to try?
[16:50] <zyga_> Facu: depending on where the changes are you need to do more or less things
[16:50] <Facu> zyga_, just master, but wait, Chipaca is currently helping me (and teaching me some go stuff)
[16:51] <Chipaca> mbuahaha
[16:51] <zyga_> Facu: you can just switch to edge :)
[16:51] <zyga_> Facu: use the edge channel luke ;-)
[16:51] <zyga_> Facu: sure I saw, I just reply in the backlog order
[16:52] <Facu> zyga_, snapd from edge (snap install snapd --channel=edge) uses staging?
[16:52] <zyga_> Facu: ah, staging store?
[16:52] <pedronis> Facu: no
[16:52] <zyga_> nope
[16:52] <Facu> ok
[16:56] <sergiusens> slangasek: cjwatson: there seems to be no series registered for click to land in older releases, should I get this in artful with bileto and go the traditional debsource route for the SRU?
[17:03] <cjwatson> sergiusens: that's fine by me
[17:07] <Facu> zyga_, for the record, all I needed to do to run snapd against staging (plus some debugging I wanted) is
[17:07] <Facu> sudo systemctl stop snapd.service snapd.socket
[17:07] <Facu> sudo SNAP_TESTING=1 SNAPD_DEBUG=1 SNAPD_DEBUG_HTTP=7 SNAPPY_USE_STAGING_STORE=1 /usr/lib/snapd/snapd
[17:13] <zyga_> :-)
[17:13] <zyga_> nice
[17:14] <Facu> zyga_, ...which doesn't really work, because you cannot mix assertions from prod and staging, as I'm currently learning
[17:15] <Facu> zyga_, so either you start a VM that never used prod before, or you somehow clean up what snapd has stored
[17:15] <zyga_> Facu: ah, indeed
[17:15] <zyga_> Facu: test builds use different key
[17:16] <Facu> zyga_, I'm not using a test build, but the system's snapd
[17:17] <zyga_> right
[17:20] <zyga_> pedronis: updated
[17:21] <zyga_> pedronis: I'll land it and cherry pick for 2.26 when it goes green
[17:21]  * zyga_ works on snap-confine patches now
[17:31] <zyga_> mvo: backport https://github.com/snapcore/snapd/pull/3425
[17:31] <mup> PR snapd#3425: errtracker: report if snapd did re-execute itself <Created by zyga> <https://github.com/snapcore/snapd/pull/3425>
[17:32] <mup> PR snapd#3425 opened: errtracker: report if snapd did re-execute itself <Created by zyga> <https://github.com/snapcore/snapd/pull/3425>
[17:46] <mup> PR snapd#3425 closed: errtracker: report if snapd did re-execute itself <Created by zyga> <Merged by mvo5> <https://github.com/snapcore/snapd/pull/3425>
[17:49] <jjohansen> zyga_: no, that fix is not in any ubuntu kernel, it was done yesterday
[17:49] <zyga_> jjohansen: aha, interesting!
[17:49] <zyga_> jjohansen: thank you!
[17:50] <zyga_> jjohansen: btw, not sure if you noticed but we ran into an apparmor oops, just trying to recall who saw this :/
[17:50] <zyga_> ogra_: was that you?
[17:52] <mvo> zyga_: me
[17:54] <jjohansen> mvo: bug?, oops trace back? anything for me to look at?
[17:54] <zyga_> mvo: ah, thank you :)
[17:56] <niemeyer> Folks, I got a heads up from Linode that they've blocked our use of the API due to volume
[17:56] <niemeyer> Expect broken tests until I sort that out
[17:58] <zyga_> niemeyer: thanks for the heads up!
[17:58] <mvo> jjohansen: let me search, I only have the bits that I gave to zyga_ the other day; it was actually me using seccomp inappropriately, so it may well not be a real apparmor bug
[17:58] <zyga_> niemeyer: volume of calls we make?
[17:58] <zyga_> ah
[17:58] <mvo> jjohansen: http://paste.ubuntu.com/24724666/ this is what I have, I think prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)  = 0 was actually the problem, i.e. I was incorrectly using that
[17:58] <zyga_> jjohansen: I recall now
[17:58] <niemeyer> zyga_: Yep, spread
[17:59] <zyga_> jjohansen: aa change onexec would fail and we'd oops
[17:59] <zyga_> jjohansen: but looks like a bug in the kernel
[18:00] <jjohansen> mvo, zyga_: ah yep the kernel I built yesterday should fix that  http://people.canonical.com/~jj/linux+jj/
[18:00] <mvo> jjohansen: great, thank you - that is all I have encountered
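For context on the `prctl(PR_SET_NO_NEW_PRIVS, ...)` call mvo singled out above: `no_new_privs` is a one-way switch, which is why setting it at the wrong time can break a later AppArmor profile transition on affected kernels. A minimal Linux-only Python/ctypes probe (constants hand-copied from `<linux/prctl.h>`) demonstrates the semantics:

```python
import ctypes

# Constants from <linux/prctl.h>; Linux-only sketch.
PR_SET_NO_NEW_PRIVS = 38
PR_GET_NO_NEW_PRIVS = 39

libc = ctypes.CDLL(None, use_errno=True)

# no_new_privs applies to the process and everything it execs.
assert libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == 0
assert libc.prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0) == 1

# It is one-way: attempting to clear it is rejected (EINVAL).
assert libc.prctl(PR_SET_NO_NEW_PRIVS, 0, 0, 0, 0) == -1
```

The kernel refuses any arguments other than `(1, 0, 0, 0)` for `PR_SET_NO_NEW_PRIVS`, so once a process sets it, neither it nor its children can undo it.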
[18:22] <niemeyer> I guess I'll need to write a new spread backend to a cloud where 80 machines isn't much.. :(
[18:27] <cachio> niemeyer, are you gonna try with aws? or it is much expensive?
[18:27] <niemeyer> It is more expensive and has other associated issues in terms of bookkeeping
[18:28] <niemeyer> But everything can be worked around
[18:28] <cachio> rackspace is another good alternative
[18:28] <niemeyer> On Linode we get a machine decent enough to run our tests for a whole month for 10 bucks
[18:28] <cachio> not sure about the cost
[18:29] <niemeyer> If I have to leave a non-mainstream provider, I'm not going anywhere else but AWS or GCE
[18:29] <zyga_> niemeyer: heh, that's pretty depressing
[18:29] <zyga_> niemeyer: did we abuse the API somehow?
[18:29] <niemeyer> Where I'm sure we'll be small bees
[18:29] <niemeyer> zyga_: We did in their opinion
[18:29] <zyga_> niemeyer: by creating/destroying machines?
[18:30] <niemeyer> zyga_: Yep
[18:30] <niemeyer> zyga_: It's about volume
[18:30] <niemeyer> zyga_: They're being dumb.. it was fine when we had 20 machines
[18:30] <zyga_> niemeyer: interesting, I wonder if that has greater impact than just running a vm 24/7
[18:30] <zyga_> niemeyer: one more option, standardize on qemu
[18:30] <niemeyer> zyga_: Now that we have 80, it's 4 times as much
[18:30] <niemeyer> From the same account
[18:30] <zyga_> niemeyer: get a few huge VMs
[18:30] <zyga_> niemeyer: and let them proxy our spread protocol
[18:30] <zyga_> niemeyer: so instead of hitting linode
[18:31] <zyga_> niemeyer: we'll hit any host with a qemu helper
[18:31] <zyga_> niemeyer: you already have the backend written
[18:31] <zyga_> niemeyer: and it's just the proxying that needs doing
[18:31] <zyga_> +/- details ;-)
[18:31] <niemeyer> zyga_: Heh.. it's just code that needs doing, yes.. like anything else
[18:31] <niemeyer> zyga_: Part of the goal here is for the system to sustain itself without maintenance
[18:32] <niemeyer> Almost entirely, at least
[18:32] <zyga_> niemeyer: or do what I did, I bought a 24-thread workstation with 48 GB of RAM for a few hundred on ebay (not for me)
[18:32] <niemeyer> Nobody babysits our machines on a daily basis
[18:32] <zyga_> niemeyer: there is no cloud, it's just someone's computer :D
[18:32] <niemeyer> zyga_: This needs _maintenance_
[18:32] <zyga_> niemeyer: yes that's true :/
[18:32] <niemeyer> zyga_: I get news about broken hardware on those 80 machines all the time
[18:32] <zyga_> broken hardware?!
[18:32] <niemeyer> It's a non-issue for us
[18:32] <niemeyer> Yep
[18:33] <zyga_> as in virtually broken virtual hardware or really broken linode metal?
[18:33] <niemeyer> zyga_: Hardware breaks surprisingly often when you get to high numbers
[18:33] <zyga_> wow, linode needs to dust their cloud more often
[18:33] <cachio> heheh
[18:33] <zyga_> or use less soap and water :D
[18:33] <niemeyer> zyga_: Again, this is *normal*
[18:34] <zyga_> interesting, I never ran anything of this size
[18:34] <zyga_> and actually this is tiny size by cloud standards
[18:34] <niemeyer> zyga_: Think about percentages
[18:34] <zyga_> I can recall one email I got from my provider when they just told me that they migrated my VM from data centre to data centre for maintenance, without stopping the VM
[18:35] <zyga_> but since the performance was slower for that minute it took, they notified me
[18:35] <zyga_> (and two months in advance AFAIR)
[18:35] <zyga_> anyway
[18:35] <zyga_> sad linode day :-(
[18:35]  * zyga_ hugs niemeyer for the amazing spread that lets us do stuff we couldn't dream of before
[18:35] <niemeyer> Yeah.. I'm not super optimistic
[18:35] <zyga_> remember when every test was written in go and mocking?
[18:36] <niemeyer> Yeah, I predict a weekend lost here
[18:36] <zyga_> and we had x50 fewer of those
[18:36] <zyga_> mvo: ^^ you may need to merge manually for the release
[18:37] <zyga_> niemeyer: what's the chance of using the canonical cloud for that?
[18:38] <zyga_> niemeyer: could we get ~100 machines that just work?
[18:38] <niemeyer> zyga_: Pretty low.. we want completely open firewalls
[18:38] <zyga_> ah, indeed
[18:38] <zyga_> well
[18:38] <zyga_> maybe google or azure :)
[18:38] <zyga_> I don't know their pricing though
[18:39] <zyga_> but yeah, more coding on the glue
[18:39] <zyga_> maybe write a blog post
[18:39] <zyga_> about linode being terrible and ugly and unfair
[18:39] <zyga_> and snapd needing a better cloud provider
[18:39] <zyga_> maybe someone will notice
[18:39] <zyga_> (we can help spread the news)
[18:40] <cachio> I used to create up to 1000 machines for performance test in aws
[18:40] <zyga_> niemeyer: out of curiosity, what is the limit (numeric) that we exceeded?
[18:40] <cachio> and I never had an internet problem or any other kind of problem
[18:40] <genii> performance test/botnet
[18:40]  * genii ducks
[18:41] <niemeyer> zyga_: None.. "you seem to be creating problems so we blocked you"
[18:42] <niemeyer> cachio: To be fair, AWS doesn't even let us create 1000 machines in general..
[18:42] <niemeyer> cachio: We had to get a special permit to do that when we needed that once
[18:42] <Chipaca> niemeyer: have you looked at ovh at all?
[18:42] <cachio> niemeyer, well, I asked for that
[18:42] <Chipaca> they were slightly mean but i think we worked through it
[18:43] <cachio> by default there is a limit of about 100 or something like that
[18:45] <cachio> the main problem is that it was expensive
[18:52] <cachio> 80 instances 8 hours/day 2vcpu 4GB ram = $917 monthly
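As a quick arithmetic check on the figure above (the $917 and instance specs come from the chat; the 30-day month is an assumption), deriving the implied hourly rate shows the numbers are mutually consistent:

```python
# Sanity-check the back-of-the-envelope estimate quoted above:
# 80 instances x 8 hours/day x ~30 days, 2 vCPU / 4 GB each, $917/month.
instances = 80
hours_per_day = 8
days_per_month = 30  # assumption

instance_hours = instances * hours_per_day * days_per_month  # 19200
implied_hourly_rate = 917 / instance_hours

print(f"{instance_hours} instance-hours/month")
print(f"implied rate: ${implied_hourly_rate:.4f}/hour")  # ~$0.0478
```

An implied rate of roughly five cents per hour is in the ballpark for a small general-purpose cloud instance of that era, so the estimate hangs together.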
[18:54]  * zyga_ dreams about those ~200 core workstations
[18:59] <niemeyer> cachio: Yeah, so it's not quite fair to say it was painless :)
[19:05] <niemeyer> Ban lifted..
[19:05] <niemeyer> Still discussing details, but it should be unblocked right now
[19:13] <Chipaca> ogra_: http://biosrhythm.com/?page_id=1453
[19:15]  * zyga_ has to break now
[19:17] <mup> PR snapd#3423 closed: errtracker: report if snapd did re-execute itself <Created by zyga> <Merged by mvo5> <https://github.com/snapcore/snapd/pull/3423>
[19:18] <mup> PR snapd#3426 opened: release: 2.26.4 <Created by mvo5> <https://github.com/snapcore/snapd/pull/3426>
[19:39] <jdstrand> niemeyer: oh my! (blocked our use cause of volume). godspeed
[19:52] <niemeyer> jdstrand: Yeah, apparently the API is not used much there..
[19:53] <niemeyer> We'll sort it out..
[19:53] <niemeyer> Found someone more accessible and happy to discuss the use case
[20:58] <azubieta> hello, I'm having some issues making a snap that uses kf5-frameworks as a part; can someone help me?
[23:16] <Hilikus> hello
[23:17] <Hilikus> how can i enable a local terminal on my raspberry pi? i want to run an XServer on the hdmi port, but by default all ubuntu core lets me do is ssh remotely, where i get permission problems trying to open the display
[23:31] <nacc> Hilikus: not sure i understand what you mean by 'local' terminal? I'm guessing that by default there is no x server running, or it's confined possibly?
[23:32] <Hilikus> nacc: I installed a chromium package that comes with an X server, but because ubuntu core only lets me connect via ssh, i have the extra problems of authenticating. if i can connect to a local tty instead of via ssh then maybe i'll have better luck
[23:32] <Hilikus> basically, i don't want to deal with the remote XServer issues
[23:33] <nacc> Hilikus: 'extra problems of authenticating'? you mean you can't ssh in?
[23:33] <Hilikus> no, i can ssh just fine
[23:33] <Hilikus> but i can't forward the raspberry X session to my desktop
[23:34] <nacc> Hilikus: i assume you mean you did `ssh -X` ?
[23:34] <Hilikus> yes
[23:35] <Hilikus> but even if that worked, i need to be able to show the X session locally, via the HDMI of the raspberry + with a local keyboard and mouse, that's why i want to enable a local terminal. every other server i've used in my life has local terminals but ubuntu core only has this message (without prompt) saying to ssh
[23:37] <nacc> Hilikus: i'm fairly sure by default core doesn't have any ttys (or gettys) running
[23:37] <Hilikus> i think so too. but do you know how to enable one? is there something i could install?
[23:38] <Hilikus> or do you have another idea how to get a local graphical session with a browser? it doesn't have to be XServer, mir or wayland would be ok. i just need a browser in a local session
[23:39] <nacc> Hilikus: I don't know, sorry
[23:39] <Hilikus> ok. thank you anyway
[23:41] <nacc> Hilikus: once you ssh in, do you see any X server running? `ps aux | grep X`
[23:42] <Hilikus> no
[23:43] <nacc> Hilikus: what snap did you install (what name)?
[23:43] <Hilikus> https://code.launchpad.net/~osomon/+snap/chromium-snap-beta
[23:45] <nacc> Hilikus: it looks like there are a lot of interfaces it depends on, are they all connected?
[23:45] <nacc> Hilikus: something like `snap interfaces`
[23:48] <Hilikus> mm actually no
[23:48] <nacc> Hilikus: this is a gray area for me -- i'm not sure running a full desktop is really the target, but it might be possible (I have no idea actually)
[23:48] <Hilikus> i didn't think of that
[23:48] <nacc> Hilikus: you might need to connect the various plugs if they weren't connected, so that the snap can run
[23:48] <Hilikus> i don't want a full desktop. this is for a kind of kiosk, i do still need a graphical interface
[23:49] <nacc> Hilikus: something like `snap connect <snap>:<plug>`
[23:49] <nacc> Hilikus: oh ok, based upon the marketing on the core page (digital signage), that certainly seems like it should be possible, but I don't know any of the details :/
[23:50] <nacc> Hilikus: actual snappy folks should be around at some point, though :) you can ask again -- or maybe post on the forum?
[23:51] <Hilikus> ok, then for sure this won't work. your points made me realize this package wants to plug into unity7, not only X, so this package is for a hybrid system
[23:52] <nacc> Hilikus: yeah, i would expect chromium to be for the full fledged browser (meaning desktop integrated)
[23:52] <nacc> Hilikus: for kiosk, you want to basically write your own snap that is just your kiosk'd application (I think)
[23:52] <nacc> Hilikus: or did you mean your kiosk'd appliation is on a website?
[23:53] <Hilikus> yes, on a website
[23:54] <nacc> Hilikus: ah ok -- I don't immediately see any standalone browsers via `snap find`, but it might be worth looking
[23:59] <Hilikus> ive been looking for a week now with no luck :(