[00:16] <mwhudson> er spread just says this to me:
[00:16] <mwhudson> error: invalid project name: ""
[00:18] <mwhudson> oh is this because my gopath is not in $HOME
[00:22] <mwhudson> yes
[00:22] <mwhudson> grumble
[01:22] <sergiusens> elopio please check my last comment on snapcraft#1552
[01:22] <mup> PR snapcraft#1552: tests: replace the first batch of demo tests with snapd integration tests <Created by elopio> <https://github.com/snapcore/snapcraft/pull/1552>
[01:37] <mup> PR snapcraft#1559 opened: static: fix flake8 errors in setup.py <Created by sergiusens> <https://github.com/snapcore/snapcraft/pull/1559>
[01:38] <sergiusens> elopio or kyrofa easy one ^
[02:16] <mup> PR snapcraft#1560 opened: ci: use travis conditionals <Created by elopio> <https://github.com/snapcore/snapcraft/pull/1560>
[04:23] <mup> PR snapcraft#1561 opened: rust plugin: record the Cargo.lock file <Created by elopio> <https://github.com/snapcore/snapcraft/pull/1561>
[04:24] <kyrofa> elopio, are you still around?
[04:26] <kyrofa> Nah, nevermind, it's too late
[04:56] <mup> PR snapcraft#1562 opened: Record rust versions <Created by elopio> <https://github.com/snapcore/snapcraft/pull/1562>
[05:35] <elopio> kyrofa: I'm here. But yes, I'll EOD, I'll continue meditating about your PR tomorrow :)
[05:35] <mup> PR snapcraft#1563 opened: tests: simplify a little the data in nodejs unit tests <Created by elopio> <https://github.com/snapcore/snapcraft/pull/1563>
[06:10] <zyga-ubuntu> brr
[06:22] <mup> PR snapd#3950 closed: cmd/snap-repair: prefer leaking unmanaged fds on test failure over closing random ones  <Created by pedronis> <Merged by mvo5> <https://github.com/snapcore/snapd/pull/3950>
[06:46] <mup> PR snapd#3951 opened:  snap: add new `snap fold` and use in tests  <Created by mvo5> <https://github.com/snapcore/snapd/pull/3951>
[07:07] <mup> PR snapd#3949 closed: cmd/snap-repair: skip disabled repairs <Created by pedronis> <Merged by pedronis> <https://github.com/snapcore/snapd/pull/3949>
[07:17]  * zyga-ubuntu wonders if the rain will stop
[07:18] <zyga-ubuntu> or will it just switch to snow
[09:22] <mup> PR snapd#3939 closed: interfaces: add Connection type <Created by stolowski> <Merged by stolowski> <https://github.com/snapcore/snapd/pull/3939>
[09:32]  * zyga-ubuntu -> cold :/
[09:37]  * ogra_ hands zyga-ubuntu a campfire
[09:58]  * Chipaca helps zyga-ubuntu warm up by running the test suite on all his lab machines
[09:58]  * Chipaca is lying
[09:58]  * Chipaca always lies, even now
[10:01] <Chipaca> huh, just got a unit test dying because of my webcam
[10:01]  * Chipaca squints
[10:05] <Chipaca> looks like i'm missing a .h
[10:06]  * Chipaca hugs “apt build-dep ./”
[10:10]  * zyga-ubuntu fights with nfs
[10:12] <pedronis> seems we have again linode -> github issues
[10:13]  * Chipaca ponders nfs-aware snapds
[10:14] <Chipaca> e.g. grab snaps off other snapds, query other snapds acking
[10:14] <Chipaca> (fear not, only pondering this because waiting for unit tests)
[10:20]  * zyga-ubuntu has small progress :)
[10:36] <Son_Goku> zyga-ubuntu: I wonder if we could have a box set up running the latest Fedora to run through the latest snapd with setroubleshoot installed and active, and collect all the denials
[10:36] <Son_Goku> from the spread test
[10:38] <Son_Goku> setroubleshoot has an API, so you could use that to collect all the denials related to snappy policy domain
[10:44] <zyga-ubuntu> mmm
[10:45] <zyga-ubuntu> Son_Goku: we could just adjust the spread test run for fedora and somehow collect the results
[10:45] <zyga-ubuntu> Son_Goku: spread has a "residue" system where it could collect a file from the tested system
[10:45] <Son_Goku> is setroubleshoot-server installed and running when the fedora vm is booted?
[10:45] <Son_Goku> without setroubleshoot, you just get shitty AVC denial messages like you do with AppArmor
[10:46] <zyga-ubuntu> Son_Goku: we could bake it into the image
[10:47] <Son_Goku> then the next question is, how do we get the data
[10:47] <Son_Goku> I know it spews messages into the journal
[10:47] <zyga-ubuntu> we'd run it manually on linode and collect it via the residue
[10:47] <Son_Goku> but I don't know if there's another location too
[10:47] <zyga-ubuntu> we'd have to script things so that there's a file to collect
[10:47] <Son_Goku> right
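The collection step sketched above ("script things so that there's a file to collect" for spread's residue mechanism) could look roughly like this. Everything here is a hedged illustration: the journal sample is fabricated and the file paths are stand-ins, not the actual spread suite's layout:

```shell
# Hypothetical sketch: filter SELinux AVC denial lines out of a captured
# journal dump into a single file that spread could collect as residue.
# The journal content below is made up for illustration.
journal_dump=$(mktemp)
cat > "$journal_dump" <<'EOF'
Sep 25 10:00:01 fedora audit[1234]: AVC avc: denied { read } for pid=1234 comm="snap-confine"
Sep 25 10:00:02 fedora systemd[1]: Started an unrelated unit.
EOF
denials_file=$(mktemp)
# keep only the denial lines; real code might instead query setroubleshoot's API
grep 'avc: denied' "$journal_dump" > "$denials_file"
cat "$denials_file"
```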
[10:48] <pedronis> mvo: so good news, preliminary testing against staging worked as expected, otoh I think we should disable the snap-repair timers during spread tests or it might get confusing when we start having some?
[10:49] <mvo> pedronis: agreed
[10:49] <mvo> pedronis: having said that, we could do it when we get the first repair
[10:50] <pedronis> as you prefer
[11:07] <zyga-ubuntu> ok, I have working NFS support in snapd
[11:07] <zyga-ubuntu> let's propose it piece by piece and let's work on a spread test
[11:08] <mup> PR snapd#3952 opened: cmd: update "make hack" <Created by zyga> <https://github.com/snapcore/snapd/pull/3952>
[11:08] <zyga-ubuntu> Chipaca: ^ trivial
[11:33] <mup> PR snapcraft#1563 closed: tests: simplify a little the data in nodejs unit tests <Created by elopio> <Merged by sergiusens> <https://github.com/snapcore/snapcraft/pull/1563>
[11:39] <mup> PR snapcraft#1559 closed: static: fix flake8 errors in setup.py <Created by sergiusens> <Merged by sergiusens> <https://github.com/snapcore/snapcraft/pull/1559>
[11:51] <pedronis> mvo: sample runs (with staging):  http://pastebin.ubuntu.com/25585711/
[12:05] <ppisati> $ snapcraft cleanbuild --output
[12:05] <ppisati> Error: no such option: --output
[12:06] <ppisati> $ snapcraft cleanbuild
[12:06] <ppisati> [cpu stuck at 100% but no output, and apparently nothing happens]
[12:06] <ppisati> sergiusens: ^
[12:11] <ppisati> elopio: ^
[12:11] <ppisati> kyrofa: ^
[12:12] <popey> jdstrand: i just uploaded an app (pulsemixer) which got stuck in review because it has no desktop file (I use the x11 interface). It is a CLI app - no GUI. But the x11 interface is required for pulse to work!
[12:12] <popey> (known bug?)
[12:15] <jdstrand> popey: it isn't a bug in the tools. I can add an exception for it
[12:15] <zyga-ubuntu> jdstrand: hey
[12:15] <zyga-ubuntu> jdstrand: I have a POC for nfs support, do you want to have a quick look?
[12:16] <jdstrand> hey zyga-ubuntu
[12:16] <popey> jdstrand: I mean a bug in the pulseaudio interface..?
[12:17] <pedronis> mvo: #3934  spreads runs seems to be plagued by networking issues :/
[12:17] <mup> PR #3934: snap-repair: implement `snap-repair {list,show}` <Created by mvo5> <https://github.com/snapcore/snapd/pull/3934>
[12:18] <mvo> pedronis: yeah, I noticed as well, super annoying, so close
[12:19] <jdstrand> popey: no. pulse uses x11 window properties to find the server. something that is doing that should declare the x11 interface
[12:19] <popey> ahh
[12:19] <jdstrand> (ie, we don't want to add X access to the pulseaudio interface)
[12:22] <jdstrand> zyga-ubuntu: where is it?
[12:23] <zyga-ubuntu> jdstrand: I just pushed it to feature/nfs-support branch
[12:23] <zyga-ubuntu> jdstrand: I tested it successfully on my artful system, I'm working on spread test
[12:24] <zyga-ubuntu> jdstrand: have a look at each patch, I tried to make it logically follow one another
[12:31] <zyga-ubuntu> jdstrand: I was thinking about also parsing etc/fstab and looking for NFS there
[12:31] <zyga-ubuntu> jdstrand: and enabling the workaround if _either_ NFS is actually mounted or fstab says it may be mounted
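A minimal sketch of the fstab check described above: enable the workaround if either a current mount or an /etc/fstab entry uses an nfs filesystem type. The fstab content is a made-up sample, and the real code in snapd would be Go, not shell:

```shell
# Hedged sketch: scan an fstab-style file for nfs filesystem types
# (third field) to decide whether the NFS workaround may be needed.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
server:/export/home /home nfs4 rw 0 0
EOF
if awk '$3 ~ /^nfs/ {found=1} END {exit !found}' "$fstab"; then
    echo "nfs workaround: enable"
else
    echo "nfs workaround: skip"
fi
```

The same filesystem-type test could be run against /proc/self/mountinfo for the "actually mounted" half of the check.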
[12:38] <jdstrand> zyga-ubuntu: I made a couple of comments. I think the direction is fine and how it is implemented makes sense
[12:40] <zyga-ubuntu> jdstrand: thank you, looking
[12:40] <jdstrand> zyga-ubuntu: I was initially thinking only nfs home. I guess it's possible some of the other policy might allow access to nfs directories depending on the setup, so I guess checking the mount table for anything is ok
[12:41] <jdstrand> maybe it makes sense to start with only home, and expand if people are affected...
[12:41] <jdstrand> probably need more people to discuss, but that can happen in PR review
[12:42] <mup> PR snapcraft#1560 closed: ci: use travis conditionals <Created by elopio> <Merged by sergiusens> <https://github.com/snapcore/snapcraft/pull/1560>
[12:42] <zyga-ubuntu> jdstrand: anything that is a subdir of home is a good test, as this will let us test this easily and will be constrained the way you explained
[12:43] <zyga-ubuntu> jdstrand: as for reexec, that will work automatically thanks to the way the apparmor imports are handled
[12:44] <zyga-ubuntu> jdstrand: I think it makes sense to extend filesystem type to handle samba but I haven't tested that so I'd like to do it as a separate step
[12:44] <zyga-ubuntu> jdstrand: thank you for the quick look, I think this will work fine shortly
[12:45] <sergiusens> ppisati I don't think we ever supported --output in `cleanbuild`
[12:45] <jdstrand> zyga-ubuntu: it isn't the imports I was worried about, it was loading the older snap-confine policy when we are reexecing
[12:46]  * sergiusens waves hello
[12:46] <jdstrand> zyga-ubuntu: re samba> sounds fine
[12:48] <zyga-ubuntu> jdstrand: older policy?
[12:49] <zyga-ubuntu> jdstrand: when we are re-execing the newer snapd will run snap-confine from core and we already have support for managing that dedicated profile
[12:49] <zyga-ubuntu> jdstrand: and that profile is a derivative of the profile stored in the core snap
[12:49] <jdstrand> zyga-ubuntu: there is /etc/apparmor.d/usr.lib.snapd.snap-confine.real, but we also end up with usr.lib.snapd.snap-confine.real.rev (or something)
[12:49] <zyga-ubuntu> jdstrand: so it will source the system-wide policy extension for snap-confine
[12:49] <zyga-ubuntu> jdstrand: right but see above, those all source one place
[12:50] <zyga-ubuntu> jdstrand: and we already have setup code that ensures those are reloaded
[12:50] <jdstrand> it isn't /var/lib/snapd/apparmor/snap-confine.d I am worried about
[12:50] <jdstrand> that directory is fine (indeed, I suggested it)
[12:50] <jdstrand> it is that you are hardcoding to apparmor_parser -r /etc/apparmor.d/usr.lib.snapd.snap-confine.real, not accounting for usr.lib.snapd.snap-confine.real.rev
[12:51] <zyga-ubuntu> right
[12:51] <zyga-ubuntu> I assume by .rev you mean the core revision
[12:51] <jdstrand> yes
[12:51] <jdstrand> I forgot what it looked like for real
[12:51] <zyga-ubuntu> I was trying to explain that IMO this is not needed because the code that manages the per-rev profile is already doing everything we need
[12:52] <jdstrand> snap.core.2898.usr.lib.snapd.snap-confine
[12:52] <jdstrand> that's an example ^
[12:52] <zyga-ubuntu> as it will derive the contents of the .rev file from the profile stored in the core snap
[12:52] <zyga-ubuntu> and then load it
[12:52] <zyga-ubuntu> is there more that we need to manage for per-revision profiles?
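The concern above (hardcoding `apparmor_parser -r` on the `.real` profile while a per-revision profile like `snap.core.2898.usr.lib.snapd.snap-confine` also exists) could be avoided by globbing for every snap-confine profile. This is only a sketch with stand-in files in a temp dir, not snapd's actual implementation:

```shell
# Stand-in profiles; on a real system these would live under
# /etc/apparmor.d with the per-revision one managed by snapd.
profile_dir=$(mktemp -d)
touch "$profile_dir/usr.lib.snapd.snap-confine.real"
touch "$profile_dir/snap.core.2898.usr.lib.snapd.snap-confine"
for profile in "$profile_dir"/*snap-confine*; do
    # real code would run: apparmor_parser -r "$profile"
    echo "would reload: ${profile##*/}"
done
```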
[12:52] <cachio> niemeyer, mvo i am taking my son to the hospital to make a bone scan
[12:53] <mvo> cachio: uh, good luck!
[12:53] <cachio> mvo, TX, should be back soon
[12:54] <nessita> sergiusens, hi! unsure if you saw but the bug from yesterday was fix released last night
[12:54] <pedronis> mvo: after having played a bit with "snap repair", do you think it would make sense to print as a field the mtime of the log file, to have an idea when things have run?
[12:54] <zyga-ubuntu> jdstrand: does my explanation make sense?
[12:54] <niemeyer> Thanks for the note, and good luck there!
[12:54] <jdstrand> zyga-ubuntu: looking at the above profile, and reminding myself what shows up in aa-status, I think there may not be a bug. but it seems weird that that code is only loading .real. doesn't that mean we might load .real when we don't have to?
[12:55] <zyga-ubuntu> jdstrand: aha, interesting question; yes, when we are reexeced we don't need to reload the .real profile
[12:55] <zyga-ubuntu> jdstrand: just write the policy and wait for the setup code to reload the right file
[12:55] <jdstrand> zyga-ubuntu: like, when we are execing the snapd in /snap/core/..., it is going to reload .real
[12:56] <zyga-ubuntu> jdstrand: right, I agree
[12:56] <zyga-ubuntu> oh, standup time
[12:56] <zyga-ubuntu> brb
[12:58] <mvo> pedronis: yeah, I think this is very nice
[12:58] <mvo> pedronis: we should also include it in the log in case of fs corruption
[12:58] <mvo> pedronis: I can add that after the meeting
[12:59] <pedronis> thanks
[12:59] <pedronis> I'll also rework my branch with some details, so it's ready when review/spread tests are back to being helpful
[13:05] <ppisati> sergiusens: ok, but do you happen to know why my 'snapcraft cleanbuild' doesn't do anything? is it normal that i don't get any output?
[13:05] <sergiusens> ppisati that is not normal, no
[13:05] <sergiusens> ppisati can you add --debug?
[13:06] <sergiusens> ppisati also check ps and see that we are not stuck on a lxc call
[13:06] <sergiusens> nessita thank you
[13:06] <davidcalle> ogra_: just so you know, we now have this page https://developer.ubuntu.com/target-platforms/boards giving an overview of available images/boards, it's not core exclusive, so if you have others in mind I don't know of (only requirement is to have flashing steps somewhere on the web and a somewhat official image working), let me know
[13:08] <davidcalle> (also, you can pass the search term through the url, eg https://developer.ubuntu.com/target-platforms/boards#arm64 to have a filtered view)
[13:10] <ppisati> sergiusens: root     17092  0.0  0.0 160904  1396 ?        Ssl  14:00   0:00 /usr/bin/lxcfs /var/lib/lxcfs/
[13:10] <ppisati> root     18245  0.0  0.0  95016  5472 ?        Ss   14:02   0:00 [lxc monitor] /var/lib/lxd/containers first
[13:10] <ppisati> flag     19422 97.2  0.5 346624 89224 pts/19   R+   15:07   2:21 /usr/bin/python3 /usr/bin/snapcraft -d cleanbuild --debug
[13:10] <ppisati> flag     19634  0.0  0.0 130284   980 pts/20   R+   15:09   0:00 grep --color=auto -e lxc -e snapcr
[13:10] <ppisati> ops
[13:10] <ppisati> sorry
[13:10] <ppisati> sergiusens: http://pastebin.ubuntu.com/25586157/
[13:20] <sergiusens> ppisati strange that no name for the assigned container is being shown. Can you `lxc list` and figure it out and then `lxc exec <container> -- ps -auxww`
[13:26] <ogra_> davidcalle, i have five "community" images too ... not sure if we want to mention them (for nanopi, nanopi-air, beaglebone, hummingboard and sabrelite)
[13:27] <ogra_> (and three more boards for "community" images in the pipe)
[13:27] <ppisati> sergiusens: http://pastebin.ubuntu.com/25586244/
[13:31] <sergiusens> ppisati there is no snapcraft running in that one :-/
[13:32] <sergiusens> ppisati and it is not named after the petname triplet we use for those containers
[13:32] <sergiusens> ppisati oh wait, have you ever used cleanbuild to build kernels?
[13:32] <matteo> ppisati: watch your step!
[13:32] <sergiusens> ppisati we tar up everything in the source and only then create a container
[13:33] <sergiusens> and push that tarball into the container
[13:33] <sergiusens> if you want a somewhat more streamlined version of this try to do `SNAPCRAFT_CONTAINER_BUILDS=1 snapcraft`
[13:39] <ppisati> sergiusens: never used cleanbuild, first time
[13:40]  * ogra_ has never seen cleanbuild fail before :)
[13:40] <davidcalle> ogra_: it would be a good idea to give them more visibility, where can I learn more about these images? Do you have repos for each or maybe a wiki page?
[13:41] <davidcalle> ogra_: (I've noticed your forum posts on images, but wondering about other places you might have)
[13:42] <ogra_> davidcalle, a wiki is still on my TODO (just didn't get to that yet) ... images are in my people.u.c home at http://people.canonical.com/~ogra/snappy/all-snaps/daily/current/ ... gadget source is at my GH account, kernels are scattered around between GH and launchpad
[13:42] <ogra_> i need to do some cleanup for the kernels ... and eventually give ppisati a patch for the allwinner support
[13:43] <ogra_> davidcalle, i'll try to get to the wikipage over the NYC week and then we can look how/if we link it from some official place
[13:43] <ogra_> are you at the rally ?
[13:44] <davidcalle> ogra_: I think the boards page can be the right place for these (with an official/community images distinction)
[13:44] <davidcalle> ogra_: no, I won't be in NYC
[13:46] <sergiusens> ogra_ ppisati so my suspicion is that it feels stuck given the size of the source; we need a nice spinner to show it is not stuck. If you go the envvar path and use a local lxd remote (the default), you get nice bind mounts instead, which would speed things up and not pollute your environment, and if it is the first run it would feel as close as a cleanbuild can feel
[13:50] <pedronis> niemeyer: do you have time for a quick chat today? I'm writing some notes for a topic for next week and I would like a sanity check that I'm not thinking about useless stuff
[13:50] <niemeyer> pedronis: Definitely
[13:51] <pedronis> niemeyer: now? or in a bit?
[13:51] <niemeyer> pedronis: My calendar is quite open today
[13:51] <niemeyer> pedronis: Now works
[13:51] <pedronis> let's do now
[13:51] <pedronis> then
[13:51] <niemeyer> pedronis: Ok, coming back to the standup
[14:00] <ppisati> sergiusens: ah, if i wait long enough, the builds finally start - sorry for the noise... :|
[14:01] <sergiusens> no worries, ppisati if you want, feel free to log a bug that it feels stuck and we will sort it with proper feedback
[14:05]  * zyga-ubuntu checks 14.04 support 
[14:06] <zyga-ubuntu> a few more tweaks and unit tests and I'll propose NFS
[14:06] <bladernr> How does one get aliases defined in snapcraft.yaml to work?
[14:07] <bladernr> for example, if I have this: usb-only:
[14:07] <bladernr>     command: bin/usb-only
[14:07] <bladernr>     plugs: *standard
[14:07] <bladernr>     aliases: [usb-only]
[14:07] <pedronis> bladernr: that's a deprecated approach, next version of snapcraft will say as much
[14:08] <bladernr> OK, but I'm not developing on the next version of snapcraft, I'm using the snapcraft available today.
[14:08] <bladernr> e.g. 2.34
[14:09] <pedronis> bladernr: that's not relevant, since snapd from 2.26 will ignore that field
[14:09] <pedronis> bladernr: the new world is described here:  https://forum.snapcraft.io/t/improving-the-aliases-implementation/18
[14:09] <pedronis> bladernr: automatic aliases are something now only controlled through the store
[14:09] <pedronis> bladernr: you'll just need to make a forum request about the aliases you need
[14:10] <pedronis> bladernr: see the last section in that forum topic
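For reference, the "manual aliases under user control" route discussed above is driven by the `snap alias` command. The snap and app names below are hypothetical, and the CLI is stubbed out so the sketch runs anywhere:

```shell
# Stub standing in for the real snap CLI, so this sketch is self-contained.
snap_cmd() { echo "snap $*"; }
# Hypothetical: expose app "usb-only" of snap "my-snap" under the alias "usb-only".
snap_cmd alias my-snap.usb-only usb-only
```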
[14:10] <bladernr> and what if I don't want to distribute via the ubuntu snap store? For development purposes, I want to distribute the snap from a local store on my MAAS server.
[14:11] <bladernr> because uploading a snap to the ubuntu store takes me upwards of 30 minutes each upload
[14:11] <bladernr> or 5 seconds locally
[14:11] <roadmr> bladernr: even with delta uploads?
[14:11] <pedronis> bladernr: what's a local store?
[14:11] <bladernr> roadmr - no idea what that is.
[14:11] <roadmr> bladernr: snapcraft should automagically upload just the delta between your last upload and the current one
[14:11] <mup> PR snapd#3953 opened: snap-confine: fix base snaps on core <Created by mvo5> <https://github.com/snapcore/snapd/pull/3953>
[14:12] <bladernr> pedronis, I want to serve snaps locally for my maas server (this is orthogonal to the alias question).  I just want to push a snap to my maas server and then on a node, snap install from my maas server
[14:12] <bladernr> I work around that now with a bunch of scp
[14:12] <pedronis> bladernr: well if it's a supported local store it will still use the same mechanisms
[14:12] <roadmr> bladernr: you do that in a script presumably?
[14:12] <roadmr> you'd have to go the "it's possible for the user to set up manual aliases under their control" route
[14:12] <bladernr> sigh...
[14:12] <pedronis> there will be a store level declaration
[14:13] <pedronis> coming from the main store
[14:13] <pedronis> or local (but that is not designed yet at all)
[14:13] <bladernr> Ok, it seems maybe my expectations are too high.
[14:14] <bladernr> Let me start from the beginning... I have a snap, I want to put that snap on a node behind my MAAS. What is the way to do that? Do I upload to the ubuntu store and then download it locally?
[14:14] <bladernr> err... by download, I mean snap install blah on my node.
[14:15] <pedronis> we are working on a proxy store that can be set up locally
[14:15] <pedronis> but yes for a while all snaps will go through the main store
[14:16] <bladernr> So in the future, assuming I start using snaps exclusively for my work, a local, standalone distribution point is a hard requirement. I CAN NOT depend on some internet-based snap store.
[14:16] <bladernr> will that proxy store replace the stuff that's documented now for local hosting but doesn't work any longer?
[14:16] <pedronis> yes
[14:16] <bladernr> e.g. the stuff at the bottom of https://snapcraft.io/docs/core/store
[14:16] <bladernr> ahhh ok
[14:16] <bladernr> that will help then
[14:17] <pedronis> first version will do proxying/caching  and some local control
[14:17] <bladernr> unfortunately, most of my customers are in segregated environments that likely have zero internet connectivity.
[14:17] <pedronis> but it will grow more features over time
[14:17] <pedronis> bladernr: you probably want to talk to people driving that about your requirements
[14:19] <bladernr> pedronis, ahh thanks.
[14:23] <cachio> is it possible to create a branch on snapd with the same content we have on edge channel?
[14:24] <cachio> mvo, ~
[14:24] <roadmr> cachio: a branch on snapd? (I think it's possible but I may be unclear about what you want)
[14:25] <cachio> roadmr, I need a branch with the same content as the core snap on edge channel
[14:26] <roadmr> cachio: ah, the *core* snap.
[14:26] <roadmr> cachio: do you know which revision of the core snap that is? (as in, that sequential revision/upload number you get)
[14:26] <mvo> cachio: that should be lp:snapd-vendor
[14:26] <mvo> cachio: well, roughly, let me double check
[14:27] <roadmr> cachio: (does mvo's answer work for you? I may be talking about different kind of branch here haha)
[14:28] <cachio> roadmr, yes, mvo has the solution :)
[14:28] <roadmr> awesome! disregard me then :)
[14:30] <zyga-ubuntu> + exportfs -r
[14:30] <zyga-ubuntu> exportfs: localhost:/srv/nfs: Function not implemented
[14:30] <zyga-ubuntu> hmmm
[14:30]  * zyga-ubuntu fires up 14.04 to experiment with nfs
[14:33] <cachio> mvo, I got it, I'll implement the change to clone that repo instead of the github one
[14:34] <cachio> mvo, that, when we run on edge
[14:38] <mup> PR snapd#3954 opened: snap: introduce structured epochs <Created by chipaca> <https://github.com/snapcore/snapd/pull/3954>
[14:38] <Chipaca> ^^ tadaa ^^
[14:38] <Chipaca> now to take a break, have some tea, and then fix some tests
[14:38] <zyga-ubuntu> enjoy :)
[14:59] <ondra> mzanetti ping
[15:05] <mvo> mzanetti: re the ipv6 problem that ondra told me about, I wonder if it makes a difference if you set "GODEBUG=netdns=cgo" in /etc/environment, i.e. if that changes the behaviour
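The suggestion above amounts to one line in /etc/environment; `GODEBUG=netdns=cgo` is a real Go runtime setting that forces the cgo (glibc) resolver instead of the pure-Go one, which can behave differently on some IPv6 setups:

```shell
# /etc/environment fragment (system-wide); affects newly started processes.
GODEBUG=netdns=cgo
```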
[15:13] <pstolowski> mvo, hey, i'm getting http://pastebin.ubuntu.com/25586715/ with master, do we have a new dependency or something?
[15:17] <nacc> so i have a rather urgent, but perhaps obvious, classic snap question. We are building git-ubuntu on LP using artful. However, the core snap is (aiui) still 16. The python3 in artful is now linked against the new glibc in artful, and (looking at unsquashfs -l), even though it's a dependency, the artful glibc is not in my snap. How is this supposed to work on xenial, e.g.? Even on artful, potentially.
[15:17] <nacc> i get messages like: /snap/git-ubuntu/x1/usr/bin/python3: relocation error: /snap/git-ubuntu/x1/lib/x86_64-linux-gnu/libdl.so.2: symbol _dl_catch_error, version GLIBC_PRIVATE not defined in file libc.so.6 with link time reference
[15:18] <nacc> i'm 'faking' PATH/LD_LIBRARY_PATH confinement in my snap
[15:18] <nacc> also what's odd is that if I have the CORE_SNAP default library paths first in LD_LIBRARY_PATH, it works fine, but then stuff I've installed in my snap (e.g., xz) fail, because the xz from the core snap is a different version than the one in artful
[15:20] <nacc> confusingly, right now, the snap segfaults on artful hosts! ... i think this is all tied together, but i'm not sure
[15:21] <mvo> pstolowski: hm, with master - strange. what machine is failing? 14.04?
[15:21] <abeato> mvo, hi, is it possible to explicitly define the writable partition in gadget.yaml? it is not usually defined: http://paste.ubuntu.com/25586755/
[15:22] <Chipaca> nacc: why are you building on artful?
[15:23] <nacc> Chipaca: i need newer dependencies that are on artful
[15:24] <nacc> (e.g., git from artful, etc.)
[15:24] <pstolowski> mvo, 16.04, it's my xenial system
[15:24] <Chipaca> nacc: can't you pull them from git or something like that?
[15:25] <nacc> Chipaca: i'm trying to make sure what we build (e.g., dpkg) is built exactly like the debs, this is for working on source package uploads to Ubuntu
[15:25] <nacc> Chipaca: if the answer is that I shouldn't be using the Artful option, I really think it should be disabled in LP
[15:26] <Chipaca> nacc: if the libc in artful diverges, then yes, I don't think you should be building on artful
[15:26] <Chipaca> nacc: but … maybe i'm wrong, maybe there's a trick to it that sergiusens or mvo know about?
[15:27] <nacc> Chipaca: that seems ... horribly broken
[15:28] <nacc> Chipaca: right? that it's even an option?
[15:28] <nacc> I don't see how it can work :)
[15:28] <nacc> I'm also very confused why LD_LIBRARY_PATH's order (core snap libs then my snap's libs) works but the other order (with the same contents!) fails
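On the ordering question just raised: the dynamic loader searches LD_LIBRARY_PATH left to right and the first match wins, which is why swapping the core-snap and snap-local directories changes which copy of a library (and therefore which glibc family) gets resolved. A sketch with throwaway directories standing in for the real snap paths:

```shell
libdir_core=$(mktemp -d)   # stands in for the core snap's lib dir
libdir_snap=$(mktemp -d)   # stands in for the snap's own lib dir
touch "$libdir_core/liblzma.so.5" "$libdir_snap/liblzma.so.5"
LD_LIBRARY_PATH="$libdir_core:$libdir_snap"
# emulate the loader's left-to-right search for one library name
picked=""
for d in $(echo "$LD_LIBRARY_PATH" | tr ':' '\n'); do
    if [ -e "$d/liblzma.so.5" ]; then picked="$d/liblzma.so.5"; break; fi
done
echo "loader would pick: $picked"
```

With the order reversed, the same search would resolve to the snap-local copy instead.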
[15:29] <Chipaca> nacc: wrt it being an option, you could be building a snap that doesn't use the libc, for example
[15:30] <nacc> Chipaca: unless you recurse and ensure that nothing you depend on uses libc, it doesn't work
[15:30] <ogra_> nacc, if you really are on artful i'd suggest to go artful all the way and simply drop the core snap's LD_LIBRARY_PATH completely
[15:31] <nacc> Chipaca: e.g., i'm using the python3 from artful (apparently, even though it's a dependency of my stage-packages, not explicitly listed)
[15:31] <ogra_> (and indeed ship *all* you need in your snap)
[15:31] <nacc> ogra_: doesn't work
[15:31] <nacc> ogra_: there's *no* libc in my snap
[15:31] <ogra_> ship one ;)
[15:31] <nacc> ogra_: so i have to explicitly list every dependency?
[15:31] <ogra_> (might need some hacking since snapcraft will try to strip it out)
[15:31] <nacc> and every dependency of every dependency
[15:31] <Chipaca> nacc: it sounds to me like your issue is more with snapcraft than snapd
[15:31] <Chipaca> nacc: may i suggest you try the rocketchat?
[15:32] <nacc> Chipaca: i have no idea where it is :)
[15:32] <Chipaca> nacc: rocket.ubuntu.com/channel/snapcraft
[15:32] <Chipaca> more snapcrafters there
[15:32] <nacc> Chipaca: ok
[15:32] <Chipaca> nacc: wrt recursing and ensuring, yes, you are responsible for all your dependencies
[15:33] <nacc> Chipaca: i feel like that's pretty false advertising :)
[15:33] <nacc> Chipaca: as obviously if i'm installing python3-keyring from artful, i want python3 from artful which wants libc from artful
[15:33] <Chipaca> nacc: what part is false advertising?
[15:34] <nacc> Chipaca: what I just said -- i'm asking for a package to be staged, which obviously means its dependencies
[15:34] <Chipaca> nacc: right, you're talking about what you tell snapcraft and what it does
[15:34] <Chipaca> nacc: i'm saying, it's your responsibility to get all the dependencies into the snap
[15:35] <nacc> yeah, that's the false part (to me). i'll ask in rocket
[15:35] <nacc> Chipaca: yep, i get that
[15:35] <nacc> I thought I was! :)
[15:35] <Chipaca> nacc: yeah
[15:35] <Chipaca> nacc: snapcraft might not support doing that, or there might be a flag for "ship all of libc" that defaults to 'no'
[15:35] <ogra_> yeah
[15:35] <ogra_> snapcraft has a blacklist for some packages
[15:35] <Chipaca> in the early days there was a long list of what is libc, that was skipped for packing
[15:35]  * zyga-ubuntu scratches head with nfs on trusty
[15:36] <ogra_> libc is one of them iirc
[15:36] <nacc> Chipaca: yeah, i can see that
[15:36] <Chipaca> but i haven't kept up; it might be smarter today
[15:36] <nacc> but the LP builder needs to be smarter, it feels like
[15:36] <Chipaca> at least a flag should be doable :-D
[15:36] <nacc> or something does :)
[15:36] <Chipaca> nacc: snapcraft, probably
[15:36] <nacc> yeah
[15:36] <Chipaca> if we want this to work at least
[15:36] <nacc> I guess that's true
[15:36] <ogra_> but using artful packages is a bad idea in general ... and in classic even worse
[15:36] <Chipaca> nacc: of course, the promise-the-sky answer is that what you want is an artful _base_
[15:36] <Chipaca> :-)
[15:37] <nacc> Chipaca: right exactly
[15:37] <nacc> and honestly, at first, that's what i thought i was going to get
[15:37] <nacc> build on artful, use artful as your base
[15:37] <Chipaca> nacc: bases aren't there yet
[15:37] <ogra_> yea
[15:37] <nacc> Chipaca: ok
[15:37] <ogra_> that's still a bit out
[15:37] <mvo> 2.28 will have basic support! 2.29 will be better
[15:38] <ogra_> but we wont have usable base snaps yet
[15:38] <Chipaca> woo!
[15:38] <Chipaca> 10 GOTO 10
[15:38]  * Chipaca snaps that
[15:38] <ogra_> even if snapd can handle them
[15:38] <Chipaca> mvo: i'll leak that bit of info to whoever it was that said we supported android os
[15:38] <ogra_> (and my bet is also that we won't look into having an artful base as the first thing)
[15:38] <nacc> yeah
[15:39] <nacc> well, it was just surprising, this all worked right up until libc moved
[15:39] <nacc> (in artful)
[15:39] <mvo> Chipaca: *cough*
[15:39] <nacc> so i hadn't considered any of this :)
[15:39] <zyga-ubuntu> darn, that was easy, nfs was just disabled on startup
[15:39] <Chipaca> mvo: <mvo> 2.28 will have basic support!
[15:39] <Chipaca> mvo: you can't take it back now
[15:39] <Chipaca> nacc: sorry :-( good thing artful's release is ages away still right?
[15:39] <ogra_> Chipaca, 10 print hello; 20 goto 10
[15:40] <nacc> Chipaca: well, the snap is already published :)
[15:40] <nacc> Chipaca: and was working until yesterday
[15:40] <nacc> when it happened to rebuild with the new libc
[15:40] <nacc> i think, at least
[15:40] <Chipaca> i wonder how many other (classic) snaps are now dead
[15:40] <nacc> yeah
[15:40] <ogra_> blame infinity !
[15:40] <nacc> well, i don't know if anyone was 'clever' like me :)
[15:40]  * ogra_ bets many people were 
[15:41] <Chipaca> well, 1. classic snap, 2. built on artful
[15:41] <nacc> cjwatson: may be able to tell us if it's searchable
[15:41] <nacc> (from a LP admin view, that is)
[15:41]  * ogra_ just had a discussion this week about LP offering artful builds ... 
[15:42] <nacc> ogra_: as in, it shouldn't? :) or what was the discussion?
[15:42] <ogra_> seems you are the first one being hit by it :)
[15:42] <cjwatson> not easily searchable
[15:42] <nacc> cjwatson: ok, thanks
[15:42] <ogra_> nacc, yeah, my opinion was that it shouldn't
[15:42] <nacc> or it shouldn't until bases are available that correspond, i guess
[15:42] <ogra_> but in the end the thing we discussed was a test build... nothing to be released (who would do that anyway :) )
[15:43] <nacc> lol
[15:44] <ogra_> Chipaca, given that classic will also mix-mesh with the actual host rootfs i wonder if artful classic snaps wouldn't even be broken when there are base snaps available
[15:44] <nacc> ogra_: yeah i had to write my own wrappers (expected) so that everything was in my snap or the core snap
[15:44] <nacc> libs and binaries
[15:44] <ogra_> yeah
[15:45] <nacc> a few other classic snappers have blogged the same (incl. didrocks)
[15:45] <ogra_> well, you can always use some wget and dpkg -x to actually squeeze the artful libc in from a scriptlet ... not sure what the outcome would be though
[15:45] <nacc> heh
[15:46] <nacc> yeah
[15:46] <ogra_> most likely a gigantic snapcraft.yaml but perhaps a working snap in the end
[15:46] <ogra_> might end up like a multi page novel
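ogra_'s wget + dpkg -x idea could be sketched as a snapcraft part scriptlet roughly like the fragment below. This is purely illustrative: the part name is made up, `VERSION` stands in for whatever libc6 artful ships, and, as the discussion above makes clear, nobody knows whether the resulting snap would actually run.

```yaml
# Hypothetical sketch only -- part name and package version are placeholders.
parts:
  artful-libc:
    plugin: nil
    prepare: |
      wget -q http://archive.ubuntu.com/ubuntu/pool/main/g/glibc/libc6_VERSION_amd64.deb
      dpkg -x libc6_VERSION_amd64.deb "$SNAPCRAFT_PART_INSTALL"
```

`dpkg -x` only unpacks the payload, so no maintainer scripts run; the libc files just land in the part's install dir and get staged like any other files.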
[15:48] <pstolowski> mvo, weird. I had to install libc6-dev-i386 and it works again. did anything change?
[15:48] <mvo> Chipaca: lol
[15:48] <mvo> pstolowski: yeah, but that is part of the new build-depends
[15:48] <nacc> ogra_: yeah, that's what i'm worried about :)
[15:48] <mvo> pstolowski: we build a i386 syscall test runner now on amd64 to test secondary arch syscall support
[15:49] <ogra_> nacc, btw, did you try to drop things like xz from the snap to get around the difference you described initially ?
[15:49] <pstolowski> mvo, ok, makes sense. i should have looked at build-depends.. thanks
[15:49] <nacc> ogra_: if i do that, xz will fail (the version in the core snap is too old for gbp)
[15:49] <nacc> ogra_: well, gbp will fail, rather
[15:50] <nacc> ogra_: the stuff i've added is stuff that is too old in the core snap
[15:50] <nacc> ogra_: so i could do that, and then the snap becomes less useful
[15:51] <ogra_> ah
[15:51] <nacc> ogra_: goes back to why i'm building on artful in the first place :)
[15:55] <Chipaca> pstolowski: that's what my going on about "apt build-dep ./" was
[15:56] <Chipaca> brb
[15:57] <pedronis> mvo: I pushed some tweaks to #3935 , probaly merit a check/recheck
[15:57] <mup> PR #3935: cmd/snap-repair: implement the repair run loop <Created by pedronis> <https://github.com/snapcore/snapd/pull/3935>
[15:57] <mvo> pedronis: great, I'll have a look
[16:10] <nacc> sergiusens: how do you know the apt you are building doesn't also need a newer dpkg in snapcraft's snapcraft.yaml ?
[16:10] <nacc> based upon what you said, you should also be building dpkg from src, no?
[16:13] <nacc> it feels like if one is making a classic snap, one should simply not use stage-packages at all, as those come from the build env (or one should only build on the same build env as the core snap, which is not an explicit binding in the yaml)
[16:14] <ogra_> well, the latter is the expectation
[16:14] <nacc> right, but that should be a hard rule then
[16:14] <ogra_> which is why you should always use cleanbuild ... because this makes sure to use the right env
[16:14] <nacc> yeah
[16:15] <kalikiana> elopio: Could you give https://github.com/snapcore/snapcraft/pull/1382 another look? I apologize for the messy diff... branch is okay locally
[16:15] <mup> PR snapcraft#1382: rust plugin: make libc configurable <Created by kalikiana> <https://github.com/snapcore/snapcraft/pull/1382>
[16:15] <ogra_> (and by default LP as well as build.s.io both default to xenial only builds)
[16:15] <nacc> ogra_: yep, i get that -- this just makes our snap so cumbersome as to possibly make it not worth my time and I should just make a .deb instead
[16:15]  * kalikiana really hates git sometimes when a simple thing in bzr becomes a huge mess in git
[16:16] <ogra_> kalikiana, you mean like *every freaking command* ?
[16:17] <sergiusens> nacc so, following up here, the only reason this is a snapcraft problem is because snapd hasn't gone down the hard path of injecting the library path through the linker and overriding which linker gets used to load (all hard things)
[16:17] <nacc> sergiusens: i don't know, is that true?
[16:17] <sergiusens> nacc that said, zyga-ubuntu did most of the initial work on classic so should be aware of the limitations
[16:17] <nacc> sergiusens: because afaict, there is no artful libc in either my snap or the core snap
[16:17] <nacc> sergiusens: so even if the loader was updated, i have to somehow get libc from artful into my snap
[16:17] <ogra_> sergiusens, well, snapcraft would still prevent your snap from shipping another libc, wouldnt it ?
[16:18] <kalikiana> ogra_: Yeah, kind of. Sometimes even typos in 'git add'. Rebase especially because there's no sanity checks or defaults
[16:18] <sergiusens> nacc and last but not least, we came up with the idea that all runnables should be compiled through snapcraft, it sort of doesn't matter where as we ensure the linker from the core snap gets picked and restrict library loading through rpath
[16:18] <nacc> sergiusens: right, but your example (snapcraft itself) doesn't compile dpkg
[16:18] <nacc> which apt runs
[16:18] <ogra_> kalikiana, i know what you mean ... after using git for a while i get along but it is still making me want to throw my laptop out of the window regularly
[16:18] <nacc> so I don't have a good example yet
[16:19] <nacc> kalikiana: i suggest using `git add -i`
[16:19] <sergiusens> nacc ah, ic your point now
[16:19] <nacc> kalikiana: less chance of typos
[16:19] <nacc> sergiusens: and so i have to manually (it seems) recurse to the lowest level of every dep and build those all by hand? what's the point :)
[16:20] <sergiusens> kalikiana sometimes, if rebases get hard, from master: `git checkout -b new-branch; git merge --squash old-branch; git branch -D old-branch`
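sergiusens' squash recipe can be tried end-to-end in a throwaway repo. A minimal sketch; the branch and file names are made up for illustration:

```shell
# Throwaway demo of the squash-merge recipe above.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email you@example.com
git config user.name demo
main=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on git version
echo base > file.txt
git add file.txt
git commit -qm 'base'
git checkout -qb old-branch
echo one >> file.txt && git commit -qam 'wip 1'
echo two >> file.txt && git commit -qam 'wip 2'
git checkout -q "$main"
git checkout -qb new-branch
git merge --squash old-branch   # stages old-branch's net diff, does not commit
git commit -qm 'feature, squashed from old-branch'
git branch -D old-branch        # -D is needed: the commits were never merged as-is
git rev-list --count HEAD       # prints 2: base + the one squashed commit
```

Because `--squash` records no merge ancestry, the result is a single clean commit on `new-branch`, which is exactly why it sidesteps replaying each old commit the way rebase would.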
[16:20] <ogra_> or just convince upstream to use bzr :)
[16:21] <sergiusens> nacc I never said I liked classic, the work was dumped on me to get it working without even consulting with me ;-)
[16:21] <ogra_> classic is an ugly wart ...
[16:21] <nacc> sergiusens: right but it's there, and either should work, or should have large alarms for stuff that's fundamentally broken, or should be disabled :)
[16:21]  * sergiusens blames zyga-ubuntu
[16:21] <elopio> kalikiana: sure
[16:21] <sergiusens> he said it was going to be easy ;-)
[16:21] <ogra_> it is the equivalent of self-contained tarballs you dump into /opt ... just with a proper update mechanism
[16:21] <sergiusens> nacc right, but we also only really support building on xenial
[16:21] <nacc> kalikiana: are you saying there aren't actually 136 commits between you and the target branch?
[16:22] <nacc> ogra_: yep, i agree
[16:22] <kalikiana> sergiusens: I actually did that, well using 'git am', because rebase took an old copy of master, rebasing of which would've introduced conflicts in files I never touched
[16:22] <nacc> ogra_: really, i don't need to be a classic snap (although that won't solve this problem), I just need to be able to write anywhere on the disk the user tells me to and have network
[16:22] <ogra_> so just make your "tarball" contain *everything* ;)
[16:23] <kalikiana> nacc: 'git log' shows me 8 commits on top, which is my actual changes, and the diff affects 4 files
[16:23] <zyga-ubuntu> sergiusens: what? :)
[16:24] <nacc> kalikiana: oh i see what you did
[16:24] <nacc> kalikiana: snapcraft:rust-musl is 128 commits behind master
[16:24] <nacc> kalikiana: so you rebased onto master? which was definitely wrong
[16:24] <nacc> kalikiana: you should have rebased onto snapcraft:rust-musl
[16:24] <ogra_> nacc, just make your snapcraft.yaml wget http://cdimage.ubuntu.com/ubuntu-base/daily/current/artful-base-amd64.tar.gz ... unpack it from a prepare scriptlet and do the rest on top ;)
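ogra_'s tarball workaround, written out as a snapcraft.yaml fragment. This is an untested sketch: the part name is invented, and whether a snap built this way actually works is exactly the open question in this conversation.

```yaml
# Untested sketch of the ubuntu-base workaround; part name is made up.
parts:
  artful-base:
    plugin: nil
    prepare: |
      wget -q http://cdimage.ubuntu.com/ubuntu-base/daily/current/artful-base-amd64.tar.gz
      tar -xzf artful-base-amd64.tar.gz -C "$SNAPCRAFT_PART_INSTALL"
```

The effect is to stage an entire artful root filesystem inside the snap, which is why the later lines joke about "giant snap" and a 70-page snapcraft.yaml.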
[16:25] <nacc> right, but *every* classic snap on artful should be doing that
[16:25] <ogra_> no
[16:25] <nacc> if they use libc, that is
[16:25] <ogra_> not if it isnt *built* on artful
[16:25] <nacc> sorry, that's what i meant by 'on artful'
[16:25] <ogra_> :)
[16:25] <nacc> *built on nartful
[16:25] <nacc> *artful
[16:25] <ogra_> nerdful :)
[16:25] <nacc> or any non-xenial, really
[16:26] <nacc> it should use the corresponding base image
[16:26] <nacc> but is that really sane to do?
[16:26] <ogra_> yes, but we dont have that yet ... the tarball above would be a workaround though
[16:26] <nacc> yeah
[16:26] <nacc> ok, i'll try that first before going down the path of rebuilding everything
[16:26] <ogra_> well, it is as sane as the whole classic concept
[16:26] <nacc> lol
[16:26] <ogra_> :)
[16:27] <nacc> also i find it frustrating that i have to use VMs to build artful snaps (cleanbuild just doesn't have any option?)
[16:27] <nacc> (e.g., it should take a lxd-profile)
[16:27] <ogra_> well, you should simply not use artful until we have a base snap for it
[16:28] <ogra_> else ... there is the path through the hoops -> ...
[16:28] <nacc> ok
[16:28] <kalikiana> nacc: I don't follow - this is rust-musl, and I wanted to rebase it onto master... I don't see how I rebase on the same branch...?
[16:28] <nacc> I mean, I won't be able to convince anyone who has bought into snaps 100% that this is insane :) I know that
[16:28] <ogra_> the really really awesome thing is ... snapcraft will actually let you do this :)
[16:28] <nacc> ogra_: but yeah, that is nice
[16:28] <nacc> at least there is a workaround
[16:29] <ogra_> you might end up with a giant snap and a 70-page snapcraft.yaml ... but it will definitely be possible in the end
[16:29] <nacc> kalikiana: https://github.com/snapcore/snapcraft/pull/1382 says you are merging from kalikian:rust-musl to snapcore:rust-musl
[16:29] <mup> PR snapcraft#1382: rust plugin: make libc configurable <Created by kalikiana> <https://github.com/snapcore/snapcraft/pull/1382>
[16:29] <nacc> kalikiana: not snapcore:master
[16:29] <nacc> kalikiana: also *your* rust-musl is not snapcore's rust-musl
[16:29] <nacc> distributed development ftw
[16:30] <sergiusens> merge and rebase are different; if you look at the command I gave out, it is much easier to operate if you don't want to replay the commits on top of the new base and just squash instead
[16:30] <kalikiana> nacc: Hmm interesting. How did that even get there. There's not supposed to be any branch like that - there's only master, and I proposed rust-musl, based on master... *scratches head*
[16:32] <kalikiana> nacc: Ha, changing the base branch fixes it! Awesome that you pointed this out!
[16:33] <kalikiana> elopio: Mess fixed :-D ^^
[16:34] <kalikiana> On a side note, this is working wonders for my headache, great distraction
[16:35] <nacc> sergiusens: you can do that with interactive rebase
[16:35] <nacc> sergiusens: which i would suggest over merge
[16:35] <nacc> sergiusens: and 'merge' in github terminology can be FF
[16:35] <nacc> (well git terminology, but the PR/MP model)
[16:37] <nacc> kalikiana: yw
[16:38] <sergiusens> nacc oh yeah, but replaying the first commit may make you scratch your head if the following ones are where you fixed things
[16:38] <nacc> sergiusens: true, yeah, i don't know the details in this case
[16:38] <nacc> sergiusens: i tend to do iterative development with wip commits (possibly noting where they need to be squashed to), then interactive rebase, squash appropriately
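nacc's wip-then-squash workflow maps onto git's fixup machinery. A throwaway-repo sketch (commit subjects and file names are invented), using `GIT_SEQUENCE_EDITOR=true` so the generated todo list is accepted without opening an editor:

```shell
# Demo of wip commits squashed via interactive rebase with --autosquash.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email you@example.com
git config user.name demo
echo a > f.txt && git add f.txt && git commit -qm 'initial'
base=$(git rev-parse HEAD)
echo b >> f.txt && git commit -qam 'feature: do the thing'
# a wip fix destined to be squashed into the commit above
echo c >> f.txt && git commit -qam 'fixup! feature: do the thing'
# --autosquash reorders and squashes fixup! commits automatically;
# 'true' as the sequence editor accepts the generated todo unchanged
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash "$base"
git log --format=%s
```

After the rebase the history is two commits ('feature: do the thing' on top of 'initial'), with the fixup folded in and its message discarded, which is the "squash appropriately" step done mechanically.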
[16:38] <sergiusens> nacc from the name, it is a month old branch, so many things have changed :-)
[16:38] <nacc> sergiusens: yeah :)
[16:43]  * sergiusens gets started with lunch
[16:44] <kalikiana> elopio: wrt #1530 I really don't know what to make of your phrasing... you sound doubtful if it's worth it, and you had doubts before... how am I supposed to see value in it? :D
[16:44] <mup> PR #1530: asserts: add cross checks for snap asserts <Created by pedronis> <Merged by pedronis> <https://github.com/snapcore/snapd/pull/1530>
[16:45] <kalikiana> erm that was supposed to be https://github.com/snapcore/snapcraft/pull/1530
[16:45] <mup> PR snapcraft#1530: tests: share the cache dir in the integration suite <Created by elopio> <https://github.com/snapcore/snapcraft/pull/1530>
[16:51] <pedronis> merging stuff today seems a bit hopeless
[17:25] <Chipaca> ok, dinner etc
[17:25]  * Chipaca goes
[17:26] <kalikiana> elopio: kyrofa https://github.com/snapcore/snapcraft/pull/1348 would appreciate another look
[17:26] <mup> PR snapcraft#1348: repo: setup a foreign arch and sources <Created by kalikiana> <https://github.com/snapcore/snapcraft/pull/1348>
[18:25] <zyga-ubuntu> eh
[18:25] <zyga-ubuntu> github
[18:58]  * kalikiana concludes tonight, as headache gets too bad to work, and investigating why inside lxd 'fuser /var/lib/apt/lists/lock' doesn't seem to work as expected is proving tedious
[19:25] <kyrofa> elopio, I have an odd request: any chance you have a fedora VM lying around?
[19:29] <kyrofa> Or really anyone: I could use SSH access to a fedora machine for a few minutes
[19:31] <kyrofa> Stupid under-powered eeepc
[19:32] <kyrofa> zyga-ubuntu, I saw a forum post from you about your setup. Any fedora installs on there?
[19:32] <zyga-ubuntu> yay, so tests pass
[19:33] <kalikiana> kyrofa, wouldn't be surprised if Pharaoh_Atem had one for you
[19:33] <zyga-ubuntu> kyrofa: hey, one but not wired (I use it for various things)
[19:33] <zyga-ubuntu> kyrofa: what would you find useful?
[19:33] <zyga-ubuntu> kyrofa: I can set something up and give you an account
[19:33] <kyrofa> Argh, Pharaoh_Atem you need a consistent handle :P
[19:33] <kyrofa> zyga-ubuntu, I just want to try running a snap on it
[19:34] <zyga-ubuntu> kyrofa: any arch will do?
[19:34] <kyrofa> amd64 if possible
[19:34] <zyga-ubuntu> kyrofa: I need to find some ram sticks but I can prepare a machine for tomorrow
[19:34] <zyga-ubuntu> kyrofa: that's easiest
[19:36] <kyrofa> zyga-ubuntu, ah, I need it tonight. You know, now that I bring it up, I bet DigitalOcean has me covered
[19:36] <zyga-ubuntu> kyrofa: do you have access to spread?
[19:36] <kyrofa> zyga-ubuntu, thank you for being willing, though!
[19:36] <kyrofa> I do
[19:36] <zyga-ubuntu> kyrofa: my pleasure, I'll try to set one up anyway, maybe over weekend
[19:36] <zyga-ubuntu> kyrofa: just spawn our F25 box
[19:37] <kyrofa> zyga-ubuntu, ah! It's been a while, what command would that be?
[19:40] <zyga-ubuntu> kyrofa: hmm, copy spread.yaml from snapd
[19:41] <zyga-ubuntu> kyrofa: yank everything but the fedora linode system
[19:41] <zyga-ubuntu> kyrofa: remove all the prepare stuff, essentially gut it
[19:41] <zyga-ubuntu> kyrofa: add foo/task.yaml with summary line
[19:41] <zyga-ubuntu> kyrofa: and see what "spread -list" prints
[19:42] <kyrofa> zyga-ubuntu, alright, good deal, thanks!
[19:43] <zyga-ubuntu> kyrofa: then
[19:43] <zyga-ubuntu> kyrofa: spread -shell linode:fedora-25-65:foo
[19:43] <zyga-ubuntu> kyrofa: that *should* give you a shell quickly
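The recipe zyga-ubuntu describes (copy snapd's spread.yaml, keep only the Linode fedora system, gut the prepare steps, add a task with a summary line) boils down to something like this minimal sketch. The project name, path, suite layout and system name here are illustrative, not copied from snapd:

```yaml
# Minimal, illustrative spread.yaml just to get a shell on a box.
project: fedora-shell
path: /root/fedora-shell
backends:
  linode:
    key: $(HOST:echo $SPREAD_LINODE_KEY)
    systems:
      - fedora-25-64
suites:
  foo/:
    summary: Scratch suite, just enough tasks to get a shell
```

With a one-task directory under foo/ containing a task.yaml whose only content is a summary line, `spread -list` should show the task and `spread -shell` can target it; the exact task address depends on the suite layout chosen.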
[19:43] <zyga-ubuntu> kyrofa: I can prepare a working spread.yaml if you want
[19:44] <kyrofa> zyga-ubuntu, if you have a sec, that would be greatly appreciating, I have a meeting now
[19:44] <kyrofa> Huh. Grammar is not my skill today
[19:51] <zyga-ubuntu> kyrofa: on it
[19:53] <sergiusens> kyrofa digital ocean ftw ;-)
[19:54] <sergiusens> then you get to control all the toggles
[19:56] <zyga-ubuntu> kyrofa: testing now
[19:59] <zyga-ubuntu> kyrofa: niiiiiice
[20:01] <zyga-ubuntu> kyrofa: let me test other systems and I'll give you a working gist
[20:06] <zyga-ubuntu> oooooh
[20:06] <zyga-ubuntu> f***********
[20:06]  * zyga-ubuntu needs to restore from backup
[20:07] <zyga-ubuntu> I removed my snapd tree
[20:07] <zyga-ubuntu> along with all the code I was working on
[20:07] <zyga-ubuntu> DARN
[20:07] <zyga-ubuntu> I copied a symlink
[20:09]  * zyga-ubuntu is pretty upset now
[20:10] <zyga-ubuntu> darn, I wonder if my backup worked
[20:10] <zyga-ubuntu> I suspect it's a few days old at least at a weekly schedule
[20:13] <zyga-ubuntu> kyrofa: that's for you http://paste.ubuntu.com/25588274/
[20:13]  * zyga-ubuntu needs to look at his backup
[20:40] <sergiusens> elopio mind taking a look at snapcraft#1564 ?
[20:40] <mup> PR snapcraft#1564: setup: changes to make installable from PyPI <Created by sergiusens> <https://github.com/snapcore/snapcraft/pull/1564>
[20:40] <mup> PR snapcraft#1564 opened: setup: changes to make installable from PyPI <Created by sergiusens> <https://github.com/snapcore/snapcraft/pull/1564>
[21:13] <sergiusens> elopio also what mup will show in a bit please :-)
[21:13] <mup> PR snapcraft#1565 opened: cli: add the assemble command <Created by sergiusens> <https://github.com/snapcore/snapcraft/pull/1565>
[21:40] <jdstrand> roadmr: hi! can you sync r933 of the review tools? not urgent; only changes are additions to overrides.py
[21:41] <roadmr> jdstrand: sure! note it's unlikely to make it to prod until after next week's rally :/
[21:41] <jdstrand> roadmr: that's totally fine
[21:41] <roadmr> jdstrand: whee :) see you there I expect?
[21:44] <jdstrand> roadmr: you bet! :)
[22:12] <Pharaoh_Atem> kyrofa: ?