[10:38] <crogers> Hey folks, anyone else having this problem with nautilus in ubuntu 17.10:
[10:38] <crogers> https://www.dropbox.com/s/pvywye6alea76j4/nautilus_undo_history_bug.movie.mp4?dl=1
[10:38] <crogers> It seems ctrl+z undo tries to undo the last file operation rather than the last text edit.
[10:38] <crogers> nautilus devs have fixed this behaviour previously, so it's probably an Ubuntu patch that's causing the problem.
[10:40] <rbasak> Have you filed a bug?
[10:41] <rbasak> And are you sure that the upstream fix is included in the version shipped in 17.10?
[10:42] <rbasak> probably an Ubuntu patch that's causing the problem> seems like a pretty unsubstantiated claim to me.
[10:45] <crogers> rbasak: https://gitlab.gnome.org/GNOME/nautilus/commit/bf6b1e2c2f0cd3d882f99029af79a3439bdacec1
[10:45] <crogers> there's the patch that was released.
[10:45] <crogers> nautilus devs cannot reproduce the bug.
[10:46] <crogers> I'm trying to determine where to post the bug report.
[10:48] <crogers> If this bug is only showing up in Ubuntu 17.10 build of nautilus, and Ubuntu patches nautilus then what other conclusion should I draw? :)
[10:50] <rbasak> It's not a reasonable conclusion to draw unless you have found that the same area of code is being patched.
[10:53] <rbasak> FWIW, it looks like that commit is included in Artful.
[10:57] <crogers> rbasak: You're saying that some other patch could not affect the behaviour of this code?
[10:58] <rbasak> Nope. I'm saying that more commonly there are other interactions that cause "can't reproduce upstream" type bugs, such as different dependency versions, so jumping to the conclusion of "it's because Ubuntu patched" is premature - especially for those unfamiliar with the code.
[11:01] <crogers> rbasak: that's just the info I got from the nautilus devs. I guess you can argue with them about code specifics.
[11:01] <crogers> https://bugs.launchpad.net/ubuntu/+source/nautilus/+bug/1729264
[11:02] <crogers> there's your bug report. Please let me know if I can help in any way.
[11:02] <crogers> Feel free to rename it if you don't like the title. :)
[11:03] <rbasak> Done. Your claim is unsubstantiated and unconfirmed and does not belong in the bug title.
[11:03] <rbasak> Thank you for filing the bug.
[11:03] <crogers> rbasak: no problem. :)
[11:04] <crogers> happy to help.
[11:04] <rbasak> Otherwise we might as well rename *all* bugs that cannot be reproduced in the same version upstream. That would be pointless :)
[11:05] <crogers> rbasak: Thanks for the feedback. It will help me produce better bug titles in the future.
[11:06] <rbasak> crogers: you're welcome. It's best to focus on exactly what behaviour is wrong. Feel free to speculate, but please keep that in the details and make it clear what is fact vs. what is speculation.
[11:07] <crogers> rbasak: Noted. Again, just passing on what nautilus devs told me. I'm not qualified to draw proper conclusions. I'm just happy if I can get enough information for a useful bug report. :)
[11:07] <rbasak> I'm sorry I can't spare the time to debug this further. Hopefully someone from the desktop team can do it.
[11:07] <rbasak> But a bug report is definitely the first step :)
[11:08] <crogers> rbalint: no problem. It's not a major issue for me, just a minor annoyance.
[11:08] <crogers> er oops
[11:09] <crogers> rbasak: ^
[11:09] <crogers> rbalint: Sorry for the noise.
[11:11] <crogers> Thanks everyone for making ubuntu better!
[12:26] <andreas> hi, does anybody know if the systemd network-online.target includes wireless networks that only come up after a user logs in on a desktop?
[13:03] <LaurenceLumi> I do #pull-lp-source imagemagick zesty
[13:04] <LaurenceLumi> but get this warning:
[13:04] <LaurenceLumi> gpgv: Signature made Mon Jul 31 13:52:14 2017 BST using RSA key ID A744BE93
[13:04] <LaurenceLumi> gpgv: Can't check signature: public key not found
[13:04] <LaurenceLumi> what do I need to do to verify the key?
[13:11] <slashd> LaurenceLumi, try this "sudo gpg --keyserver keyserver.ubuntu.com --recv-keys A744BE93"
[13:11] <slashd> and retry
[13:19] <LaurenceLumi> no joy, if I run sudo gpg --fingerprint I can see it was added
[13:32] <LocutusOfBorg> sudo? LaurenceLumi you don't need it, maybe you are adding to the root keyring, not to the current user one?
[13:42] <LaurenceLumi> I think the files in ~/.gnupg ended up being owned by root; I fixed that,
[13:42] <LaurenceLumi> gpg --fingerprint shows:
[13:42] <LaurenceLumi> pub   4096R/A744BE93 2014-06-16 [revoked: 2016-08-16]
[13:42] <LaurenceLumi>       Key fingerprint = 2445 F8BC A621 74B9 2DFF  04E3 F021 0224 A744 BE93
[13:42] <LaurenceLumi> uid                  Marc Deslauriers <marcdeslauriers@videotron.ca>
[13:42] <LaurenceLumi> pub   4096R/A744BE93 2010-09-30
[13:42] <LaurenceLumi>       Key fingerprint = 50C4 A0DD CF31 E452 CEB1  9B51 6569 D855 A744 BE93
[13:42] <LaurenceLumi> uid                  Marc Deslauriers <marcdeslauriers@videotron.ca>
[13:43] <LaurenceLumi> uid                  Marc Deslauriers <mdeslaur@ubuntu.com>
[13:43] <LaurenceLumi> uid                  Marc Deslauriers <mdeslaur@canonical.com>
[13:43] <LaurenceLumi> uid                  Marc Deslauriers <marc.deslauriers@ubuntu.com>
[13:43] <LaurenceLumi> uid                  Marc Deslauriers <marc.deslauriers@canonical.com>
[13:43] <LaurenceLumi> sub   4096R/37AD7647 2010-09-30
[13:45] <LaurenceLumi> Am I getting the error because the key is revoked?
[13:50] <mdeslaur> LaurenceLumi: that's not my key
[13:50] <mdeslaur> LaurenceLumi: my key has fingerprint 50C4 A0DD CF31 E452 CEB1  9B51 6569 D855 A744 BE93
[13:50] <mdeslaur> you imported two keys with a short id collision
[13:51] <mdeslaur> LaurenceLumi: you probably need to remove the revoked one from your keyring
[13:51] <LaurenceLumi> Ok, how do I delete, and import them correctly
[13:51] <LaurenceLumi> much appreciate the help
[13:51] <Faux> That's incredibly unlucky.
[13:52] <cjwatson> That's not unlucky - somebody generated collisions for the whole strong set in bulk, essentially as an argument that everyone needs to stop using short 32-bit key IDs
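(A quick illustration of cjwatson's point, using the two fingerprints pasted above: the short key ID is just the last 32 bits, i.e. 8 hex digits, of the full 160-bit fingerprint, so distinct keys can be generated to share one.)

```shell
# The two keys LaurenceLumi imported, fingerprints with spaces removed:
fpr_revoked="2445F8BCA62174B92DFF04E3F0210224A744BE93"
fpr_real="50C4A0DDCF31E452CEB19B516569D855A744BE93"

# The short (32-bit) key ID is just the last 8 hex digits:
short_revoked=$(printf '%s' "$fpr_revoked" | tail -c 8)
short_real=$(printf '%s' "$fpr_real" | tail -c 8)

# Two completely different keys answer to the same short ID:
echo "$short_revoked $short_real"   # A744BE93 A744BE93
```

This is why --recv-keys A744BE93 fetched both keys, and why short IDs should never be used to identify a key.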
[13:52] <cjwatson> gpg --delete-key 2445F8BCA62174B92DFF04E3F0210224A744BE93 to delete the collided one, and then you have the correct one already
[13:53] <cjwatson> slashd: please don't recommend that anyone uses 32-bit key IDs for anything
[13:55] <LaurenceLumi> Ok, I just deleted both, just for fun...
[13:55] <LaurenceLumi> how do import it correctly
[13:55] <cjwatson> you already did
[13:55] <LaurenceLumi> originally I did
[13:55] <LaurenceLumi> gpg --keyserver keyserver.ubuntu.com --recv-keys A744BE93
[13:55] <cjwatson> oh, you deleted it, "just for fun"
[13:56] <LaurenceLumi> well just learning...
[13:56] <LaurenceLumi> trying to figure out how to do things properly, i.e. how to get the correct one on the first go
[13:56] <cjwatson> what I recommend is that you put "keyid-format long" in ~/.gnupg/gpg.conf (or ~/.gnupg/options if that file exists)
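(cjwatson's recommendation as a config fragment: with keyid-format long, gpg displays the full 64-bit key ID instead of the collidable 32-bit one.)

```
# ~/.gnupg/gpg.conf (or ~/.gnupg/options if that file exists)
keyid-format long
```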
[13:58] <LaurenceLumi> Ok, I think I am getting it,
[13:58] <cjwatson> (testing)
[13:58] <cjwatson> sigh, that still doesn't cause gpgv to output the long key ID.  Not helpful, gpgv!
[13:59] <cjwatson> I mean, you don't actually have to verify this, since it's already been verified by virtue of being in Launchpad and otherwise you have to figure out whether the key it was signed by is one authorised to upload to Ubuntu
[14:00] <cjwatson> since you don't have a trust path to the key in question, there isn't much value in you independently verifying it
[14:01] <cjwatson> but to get the key in a safer way, the simplest way I can find is to run "gpg --verify imagemagick_6.9.7.4+dfsg-3ubuntu1.2.dsc" which says "using RSA key 6569D855A744BE93", and then you can run "gpg --keyserver keyserver.ubuntu.com --recv-keys 6569D855A744BE93"
[14:01] <cjwatson> (slashd shouldn't have recommended "sudo gpg" here - it's just gpg)
[14:02] <cjwatson> after having done that you can then run "gpg --verify imagemagick_6.9.7.4+dfsg-3ubuntu1.2.dsc" again
[14:03] <cjwatson> gpg's UI is fairly nasty though, and I do recommend thinking about whether verifying the signature in fact buys you anything meaningful in this particular case.  Obviously you do need to do it if you've acquired a package in a way that could have been man-in-the-middled
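(The safer fetch-and-verify sequence cjwatson describes, collected into one sketch; the .dsc filename is the one from this session, and the commands need network access and the downloaded source package.)

```shell
# 1. A failed verify still reports which key signed the file:
gpg --verify imagemagick_6.9.7.4+dfsg-3ubuntu1.2.dsc
#    ... "using RSA key 6569D855A744BE93"

# 2. Fetch that key by its full 64-bit ID - plain gpg, no sudo,
#    so it lands in your own keyring rather than root's:
gpg --keyserver keyserver.ubuntu.com --recv-keys 6569D855A744BE93

# 3. Verify again, now with the key available:
gpg --verify imagemagick_6.9.7.4+dfsg-3ubuntu1.2.dsc
```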
[14:04] <slashd> cjwatson, noted
[14:06] <tacocat> is there anything I have to do to resync a package that had a source rename in Debian?
[14:09] <LaurenceLumi> cjwatson thanks, I went through those steps thoroughly; I can verify the package with "gpg --verify imagemagick_6.9.7.4+dfsg-3ubuntu1.2.dsc", but why does
[14:10] <LaurenceLumi> pull-lp-source imagemagick zesty still throw an error
[14:10] <LaurenceLumi> gpgv: Can't check signature: public key not found
[14:11] <LaurenceLumi> is that because "This key is not certified with a trusted signature!"
[14:11] <LaurenceLumi> ?
[14:18] <cjwatson> gpgv uses a separate keyring, apparently
[14:18] <cjwatson> ~/.gnupg/trustedkeys.gpg
[14:18] <cjwatson> honestly I wouldn't worry about it if you've separately verified the package
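(If you do want pull-lp-source's gpgv check to pass, a sketch; this assumes gpgv's behaviour of reading ~/.gnupg/trustedkeys.gpg rather than your main keyring, as cjwatson observes above.)

```shell
# Import the signer's key into the keyring gpgv actually consults:
gpg --no-default-keyring --keyring ~/.gnupg/trustedkeys.gpg \
    --keyserver keyserver.ubuntu.com --recv-keys 6569D855A744BE93
```

As noted, this buys little if you've already verified the package separately.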
[14:22] <LaurenceLumi> OK, now I get thanks
[14:47] <jbicha> bdmurray: What is "Install updates" in ubiquity supposed to do?
[14:48] <jbicha> because I just installed Ubuntu GNOME 17.04, rebooted, ran sudo apt update, and apt list --upgradable and I am surprised that there are lots of security updates in that list
[14:50] <jbicha> my interpretation of the ubiquity wording was that it should install all updates, security or not; but maybe it's not even doing either?
[14:56] <bdmurray> jbicha: I'm not positive - cyphermox or xnox may know.
[14:58] <xnox> jbicha, that tick box, downloads updates in the background whilst the system is installing / configuring.
[14:58] <jbicha> ahem
[14:58] <xnox> jbicha, expectation is that a system after such an install, is booted without network, but should have loads of debs in /var/lib/archive
[14:59] <xnox> jbicha, but it all depends on how slow/fast install is, cause i think on ssd the install will finish before that option can do anything useful.
[14:59] <bdmurray> So installing is misleading?
[14:59] <xnox> bdmurray, possibly became obsolete due to much faster hardware these days - all laptops from the last 5-10 years have SSDs now?
[14:59] <jbicha> oh I guess I was misreading the actual wording: https://github.com/googlei18n/noto-fonts
[15:00] <cjwatson> It used to be titled "Download updates while installing".  Is it not still?
[15:00] <jbicha> https://www.linuxtechi.com/ubuntu-17-10-installation-guide-screenshots/
[15:00] <jbicha> obviously not noto
[15:01] <bdmurray> cjwatson: It is worded that way
[15:01] <jbicha> that seems surprising that "download updates" may only download a few updates
[15:01] <jbicha> it confused me that "download updates" and "install third-party software" are on the same page. I expected it to install updates
[15:02] <jbicha> I don't understand how it's useful to just download .debs without installing them
[15:03] <jbicha> and I don't understand your hypothetical example where the installer has networking but there isn't networking a few minutes later upon reboot
[15:03] <bdmurray> jbicha: it's useful in the way it says "saves time after installation"
[15:03] <jbicha> it doesn't really save time if it's not going to bother installing them
[15:05] <jbicha> since we install other packages (language packs, optional mp3 support), why don't we install at least security updates too so it's fairly secure "out of the box"?
[15:05] <xnox> jbicha, it saves a lot of time when using e.g. 16.04.0 but there are a lot of things to upgrade post install.
[15:05] <xnox> jbicha, the 3rd party stuff is actually installed, i.e. nvidia
[15:07] <jbicha> would it be better if I open a bug or start a list discussion? I don't think the question we ask during the install is worth asking
[15:08] <jbicha> we don't need permission to download security updates by default and IMO we don't even need permission to install them by default either
[15:09] <mdeslaur> hrm, I'm not sure how well installing security updates would work during the installer, since it's not the real system
[15:10] <jbicha> the same way as installing and uninstalling language packs?
[15:10] <mdeslaur> those don't really have complex maintainer scripts
[15:10] <cjwatson> those are super-specialised
[15:10] <cjwatson> I mean seriously, the installer in fact relies on how simple they are
[15:11] <cjwatson> it can "uninstall" them by ignoring all their files while copying the system and then doing a bit of tidy-up at the end
[15:11] <cjwatson> which makes a serious speed difference for some images
[15:12] <jbicha> am I wrong in thinking that ubuntu-restricted-addons shouldn't be an especially simple set of packages to install either?
[15:15] <LaurenceLumi> question: everything I do inside pbuilder login is thrown away when I exit, correct?
[15:16] <cjwatson> jbicha: I think that's basically dropping files in place and doesn't tend to e.g. involve maintainer scripts that interact with services
[15:17] <cjwatson> jbicha: anyway IMO the question should be between downloading updates or not doing so, not between installing them or not; there's a general principle that we need to get out of the installer ASAP and let the user do useful work
[15:17] <cjwatson> LaurenceLumi: yep, as the documentation says
[15:18] <LaurenceLumi> That's what I thought but when I read this https://wiki.ubuntu.com/PbuilderHowto#dpkg_setting
[15:18] <LaurenceLumi> it sort of suggests otherwise :-)
[15:19] <LaurenceLumi> or have I misunderstood what the guide is saying
[15:20] <jbicha> cjwatson: ok, do we even need to bother asking for permission to download updates in the background? since that's done by default after install anyway
[15:20] <tarzeau> https://launchpad.net/~gagarin/+maintained-packages why is there not the Aa and Bb stuff listed?
[15:22] <cjwatson> LaurenceLumi: should perhaps recommend --save-after-login to make it permanent
[15:22] <cjwatson> jbicha: yeah, I'm not sure
[15:23] <cjwatson> tarzeau: I think that only lists the initial upload target, not wherever the packages in question may have been copied to since then
[15:24] <tarzeau> cjwatson: and it's not just it doesn't bother about uploaders, but only maintainers (teams mainly for my pkgs)?
[15:24] <LaurenceLumi> haha so I could have just accepted all the defaults, just done a pbuilder create, then logged in with --save-after-login and made my changes manually and carried on?
[15:24] <jbicha> thanks, I'll file a bug about the download question
[15:24] <tarzeau> cjwatson: because it still seems to list packages that i'm maintainer  (not team maintained in debian), and not those (team maintained) as described above
[15:25] <tarzeau> and somehow this changed, some months ago (thus breaking kinda the karma counter)
[15:25] <tarzeau> where would the debian team pages on lp be? to check?
[15:27] <cjwatson> tarzeau: I'm pretty certain none of that has changed for years
[15:27] <cjwatson> Debian teams aren't necessarily represented on LP at all, although you can search by email address on https://launchpad.net/people
[15:28] <cjwatson> karma is pretty arbitrary and not every possible action is guaranteed to be credited there
[15:30] <cjwatson> LaurenceLumi: there are often a few different possible ways to set things up, yes
[15:30] <TJ-> jbicha: cjwatson re:Download question. I prefer it there and usually don't enable it. The reason being I want the network's package proxy/mirror to be configured to prevent unnecessary external network activity.
[15:31] <LaurenceLumi> I would suggest my way is probably the beginner's way, but I get it now
[15:31] <TJ-> jbicha: as long as there's an easy to see/set option to add the proxy address though :)
[15:31] <jbicha> TJ-: at a minimum, I think we'd want to enable the download option by default
[15:32] <TJ-> jbicha: as long as I still have the control to disable it :)
[15:32] <jbicha> I think it's funny that for as many times as I've installed Ubuntu, I didn't really read that page carefully enough
[15:33] <TJ-> jbicha: when doing repeated VM-based testing it's a pain having it always want to pull in additional packages
[15:42] <andreas> hi, does anybody know if the systemd network-online.target includes wireless networks that only come up after a user logs in on a desktop? I'd say no, but wanted to check
[15:42] <Faux> Probably depends on whether you're on networkmangler or not.
[15:43] <Faux> I believe most machines wait for network online before starting a login manager, due to most services being poorly written.
[15:43] <andreas> the wifi would be managed by network-manager, yes
[15:44] <nacc> https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/ ?
[15:46] <nacc> andreas: NM-wait-online.service appears to be before network-online.target
[15:46] <nacc> andreas: *but* it sounds like the user has configured that wireless network to only be available for their user?
[15:47] <LaurenceLumi> I am getting loads of "ln: failed to create hard link '/var/cache/pbuilder/build/17130/var/cache/apt/archives/libustr-1.0-1_1.0.4-5_amd64.deb': File exists" from a pbuilder create, what's the cause and can I ignore these?
[15:49] <Faux> My artful laptop seems to have delayed network-online.target for the wifi to come up, but it hasn't delayed the DM starting.
[15:50]  * Faux sighs at postfix not starting until the wifi is up; why would that even be a thing.
[15:50] <rbasak> There's a bug on that I think.
[15:51] <rbasak> I don't see it as particularly important (personally) though I think others were making progress in the bug. Why is running postfix on a system whose connectivity is wifi a thing?
[15:51] <Faux> So I get mail from cron?
[15:51] <rbasak> You don't need postfix for that.
[15:52] <Faux> The actual answer is because all my machines are configured the same, although I thought it was necessary for local mail delivery like that.
[15:52] <rbasak> You need a working sendmail. It doesn't have to be postfix (as I suppose the name implies :-)
[15:53] <rbasak> But also, cron jobs that matter on a machine with limited connectivity?
[15:53] <Faux> Debian #854475 explains that postfix is configured to be After=network*.target just in case a user has changed the configuration, which is an argument I'm not a big fan of.
[15:53] <Faux> Things run in cron jobs. I need to work out what went wrong. Until everything is running as a systemd timer, this involves digging through mail as well as journald.
[15:53] <rbasak> Ultimately defaults cannot work for people with weird configurations in the general case. By definition.
[15:54] <rbasak> It's override-able. You should override it.
[15:54] <Faux> I could. But, as it doesn't block the DM (for me), it's not a big issue. I have overriden whatever it was that was causing it to block the DM.
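(For reference, the kind of override rbasak suggests, as a minimal sketch; it assumes the stock postfix.service ordering from Debian #854475. The empty After= resets the unit's ordering list before re-adding only network.target.)

```ini
# /etc/systemd/system/postfix.service.d/override.conf
# (create with: sudo systemctl edit postfix)
[Unit]
After=
After=network.target
```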
[15:55] <andreas> nacc: thanks for that link, reading
[15:58] <andreas> looks like pulling in network-online is even frowned upon for server software
[16:03] <nacc> Faux: i used to have to restart postfix all the time, whenever i would go to the office, or go home, etc. It was dumb, but postfix felt dumber for not being able to handle the network changing.
[16:27] <Faux> Junk.
[16:28] <nacc> Faux: it's possible that has improved since then
[16:29] <Faux> The bug is pretty recent. Although iirc the C changes are trivial.
[16:50] <LaurenceLumi> following one of the guides on pinning, I do #apt-get -b source debhelper
[16:50] <LaurenceLumi> it compiles, but how do I correctly install it?
[16:51] <nacc> LaurenceLumi: what are you actually trying to do?
[16:51] <LaurenceLumi> follow this guide https://help.ubuntu.com/community/PinningHowto#Recommended_alternative_to_pinning
[16:53] <LaurenceLumi> actually I am trying to build imagemagick from zesty source on xenial, but I am trying to follow some of the basic guides, which are full of little errors
[16:54] <LaurenceLumi> I did #apt-get -b source debhelper which created two packages: debhelper_10.2.2ubuntu1_all.deb  dh-systemd_10.2.2ubuntu1_all.deb
[16:55] <Faux> My experience is that trying to backport anything deb/dpkg related ends in sadness; it's probably not a good idea.
[16:56] <LaurenceLumi> doing #apt-get -b source -t zesty debhelper as suggested by the guide did not work at all
[16:57] <LaurenceLumi> I can create packages, both using pbuilder and following the guide here https://help.ubuntu.com/community/PinningHowto#Recommended_alternative_to_pinning
[16:57] <LaurenceLumi> so I have lots of packages, but I am not sure how to install them properly. Not keen on destroying my machine,
[17:02] <nacc> LaurenceLumi: why can't you use debhelper in x-b?
[17:03] <LaurenceLumi> what is x-b?
[17:03] <nacc> LaurenceLumi: xenial-backports
[17:05] <LaurenceLumi> I could; I chose it as an example when following the guide. If the guide worked for debhelper I could tackle imagemagick
[17:06] <nacc> LaurenceLumi: why not just use a ppa for this?
[17:07] <nacc> LaurenceLumi: do you need some feature from the zesty imagemagick?
[17:07] <LaurenceLumi> yes
[17:08] <LaurenceLumi> yes connected-components functionality
[17:08] <nacc> LaurenceLumi: why not just run zesty or artful in a VM?
[17:09] <LaurenceLumi> I grabbed the source and compiled it, but I thought I should learn how some of Ubuntu's tooling works,
[17:09] <LaurenceLumi> so I am working my way through some of the suggested ways of doing things
[17:11] <LaurenceLumi> so far I have set up pbuilder, and successfully built imagemagick_6.9.7.4+dfsg-3ubuntu1.2.dsc which gave me loads of deb files
[17:11] <nacc> LaurenceLumi: ok, so install those debs
[17:12] <LaurenceLumi> but I am not sure how to install them, and wasn't in a hurry until I understood what was going on
[17:12] <nacc> LaurenceLumi: you don't know how to install debs?
[17:12] <LaurenceLumi> no
[17:13] <LaurenceLumi> I am guessing it's dpkg -i <package>, but I don't understand how the dependency bit works
[17:13] <LaurenceLumi> hence trying to follow guides
[17:14] <nacc> LaurenceLumi: fwiw, you probably wanted to properly adjust the versioning of that
[17:14] <nacc> LaurenceLumi: dpkg doesn't do dependency resolution itself
[17:15] <nacc> LaurenceLumi: you'll need to install all the packages at the same time, or use something like gdebi
[17:17] <jbicha> sudo apt install ./foo.deb to install foo is useful too (the ./ is important)
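(The two routes side by side, as a sketch; the deb filenames are illustrative, not the exact ones pbuilder produced.)

```shell
# apt resolves dependencies, pulling anything missing from the
# archive; the ./ marks these as local files, not archive names:
sudo apt install ./imagemagick_6.9.7.4+dfsg-3ubuntu1.2_amd64.deb \
                 ./libmagickcore-6.q16-3_6.9.7.4+dfsg-3ubuntu1.2_amd64.deb

# dpkg alone does no dependency resolution; a follow-up
# "apt-get install -f" fixes up whatever it left unconfigured:
sudo dpkg -i ./*.deb
sudo apt-get install -f
```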
[17:17] <LaurenceLumi> Ok I get it now
[17:17] <nacc> jbicha: ah thanks, i forgot apt can do that
[17:18] <LaurenceLumi> Ok so the guide I linked before is missing that step...
[17:18] <nacc> LaurenceLumi: i feel like any guide that is about *building* packages, assumes you know what to do with packages
[17:19] <LaurenceLumi> just feedback, from a newbie
[17:19] <LaurenceLumi> one more question, what about the previous version of ImageMagick I installed
[17:19] <nacc> LaurenceLumi: IMO, a newbie probably should not be trying to do this backport :)
[17:19] <nacc> LaurenceLumi: at least not on their machine
[17:20] <nacc> use a PPA or something
[17:20] <nacc> LaurenceLumi: what about it?
[17:21] <LaurenceLumi> actually before I go on, so as a newbie, how should I be using a PPA?
[17:22] <nacc> LaurenceLumi: the backportpackage tool
[17:26] <LaurenceLumi> OK, wish I found that before I started :-)
[17:26] <nacc> LaurenceLumi: yeah :)
[17:26] <LaurenceLumi> Ok back to hypothetical question, because I think I will investigate the backportpackage
[17:28] <LaurenceLumi> apt install ./foo.deb ./bar.deb ./foofoo.deb, what about previous versions of foo bar and foofoo?
[17:28] <LaurenceLumi> i.e. should I purge the previous release before attempting a manual install with deb packages
[17:29] <nacc> LaurenceLumi: do you purge previous versions when you upgrade normally?
[17:31] <LaurenceLumi> no, but I am not using packages directly
[17:33] <nacc> LaurenceLumi: in general it should be fine. If it is a very large jump, it's possible some upgrade paths have been dropped in the new packaging.
[17:36] <LaurenceLumi> Just looking at backportpackage, am I right that the big difference is that it will upload the *.debs that get built into a PPA, so that apt-get will see them as normal packages to upgrade and everything works as expected, plus I can let others know about my PPA?
[17:37] <LaurenceLumi> as compared to using pbuilder to build the debs?
[17:38] <nacc> LaurenceLumi: it builds it in the PPA
[17:38] <nacc> LaurenceLumi: nothing runs on your system at all
[17:38] <nacc> LaurenceLumi: i believe it also will do the versioning correctly
[17:39] <cjwatson> (note that PPA response times aren't going to be great just now as we have a lot of stuff going on in our build farm)
[17:40] <nacc> cjwatson: yeah :)
[17:40] <nacc> was going to mention that next
[17:41] <LaurenceLumi> that's fine, it will be 100x faster than the time I already spent, but at least I did learn something
[17:41] <LaurenceLumi> maybe
[17:41] <nacc> LaurenceLumi: yeah :)
[17:41] <nacc> LaurenceLumi: in your case, it doesn't seem like you really needed to learn quite as much about source packages, you just wanted to do a backport
[17:42] <LaurenceLumi> how do I make suggestions regarding guides? both the pinning and backports pages don't mention backportpackage
[17:42] <nacc> LaurenceLumi: and you can configure your ppa (before doing the backportpackage) to have xenial-backports as a dependency, so that it can build with a newer debhelper, e.g., on xenial
[17:43] <LaurenceLumi> thanks nacc, I will do that
[17:44] <nacc> LaurenceLumi: which pages are you referring to?
[17:46] <LaurenceLumi> this https://help.ubuntu.com/community/PinningHowto one and this https://wiki.ubuntu.com/UbuntuBackports
[17:46] <nacc> LaurenceLumi: the latter is a wiki page
[17:46] <LaurenceLumi> but actually the second does have
[17:46] <nacc> and i think the first is too
[17:46] <nacc> and the first shouldn't mention backportpackage, it's only about pinning
[17:48] <LaurenceLumi> but the pinning page actually has an alternative method, i.e. build from source
[17:48] <LaurenceLumi> but I guess it is there,
[17:48] <nacc> ugh, that's dumb
[17:48] <nacc> I absolutely do not think building a deb from source is a "recommended alternative to pinning"!
[17:49] <LaurenceLumi> I agree!
[17:49] <nacc> jbicha: maybe i'm wrong, though, thoughts?
[17:49] <LaurenceLumi> I think that is what actually started me off on the whole pbuilder thing
[17:50] <LaurenceLumi> when what I probably wanted was backportpackage
[17:50] <nacc> LaurenceLumi: yeah, i can imagine that
[17:50] <jbicha> uh, there's different levels of recommendations. Ordinary users shouldn't need to build any packages, especially not low-level ones like imagemagick
[17:51] <jbicha> but yeah, backportpackage is a lot easier than setting up a build environment
[17:52] <nacc> jbicha: i meant specifically the pinning page recommending that users build packages from source as an alternative to pinning (and citing libc6 as a (good?) example of such a package!)
[17:53] <jbicha> uh, I think backportpackage is better than a recommendation to pin packages from a totally different Ubuntu release, yes
[17:54] <jbicha> pinning may be useful where you only want one package from a large ppa
[17:54] <nacc> right
[17:54] <LaurenceLumi> is there a way to follow through with that suggestion, i.e. who do I email/ping?
[17:54] <nacc> i think the pinning community page shouldn't mention source packages at all
[17:56] <jbicha> if you're going to work on it, you should kill or heavily edit the Pinning Methods section where it encourages people to pick and choose Ubuntu releases to install packages from
[17:56] <jbicha> that's why there's that whole libc6/backports warning
[17:57] <nacc> jbicha: yeah, it just seems like ... bad advice :)
[17:57] <nacc> LaurenceLumi: it's a wiki page, so you just need to log in and have rights
[18:29] <infinity> nacc: Gah.  Wow.  That page's suggestion that "pinning libc6 doesn't work, so try backporting!" is amazingly irresponsible.
[18:29] <infinity> And also shows that whoever wrote it doesn't grasp pinning.
[18:29] <infinity> Cause if you pin *all* of glibc's binaries, it works great.
[18:30] <infinity> Of course pinning one binary from a source package will end in tears.
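(What "pin all of glibc's binaries" might look like as an apt preferences fragment; the binary list and release here are illustrative, and a real glibc source build produces more binaries than shown.)

```
# /etc/apt/preferences.d/glibc
Package: libc6 libc6-dev libc-bin libc-dev-bin locales
Pin: release n=xenial
Pin-Priority: 1001
```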
[18:36] <sarnold> is -that- how we get so many bug reports from folks trying to install different versions of a package in both i386 and amd64 flavours?
[19:36] <nacc> infinity: yeah, it's terrible!
[19:36] <nacc> sarnold: i'm starting to think it's at least part of it
[19:42] <sarnold> nacc: it'd be nicest if the tools could recognize that step before taking it :)
[19:47] <infinity> sarnold: Some sort of Ubuntu Clippy that pops up and says "It looks like you're trying to do something stupid"?
[19:48] <infinity> "Would you like me to turn off all your bug reporting tools?"
[19:48] <nacc> heh
[23:02] <sarnold> infinity: haha :) I love turning off the bug reports rather than not installing the update.. hehe
[23:03] <infinity> sarnold: As to your original comment, the biggest source of arch/version skew appears to be people who install arch_X from a repo, disable said repo, then wonder why things are broken.
[23:03] <infinity> sarnold: When that's a PPA, I can sort of (but not overly) sympathise, but I see it a LOT with people who install from -updates, then disable -updates and file a bug report next time something breaks.
[23:03] <infinity> Like, wat?
[23:03] <sarnold> infinity: ohhh, that'd be hard to spot without magic version numbers..
[23:04] <sarnold> oof. with -updates disabled???
[23:04] <infinity> sarnold: Yep, see it a lot more than seems reasonable.  You'll see something complaining about "can't install foo because depends foo-common=1, but foo-common=1.1 is installed".
[23:05] <infinity> sarnold: Which means they got 1.1 from -updates, then disabled updates, then whee.
[23:05] <infinity> sarnold: And that, apparently, is our bug, not theirs.
[23:05] <infinity> I really want to be able to change a package task to a person.
[23:06] <sarnold> infinity: thanks for pointing it out, I like to be able to help folks along when I can, and this one ought to be cheap enough to spot
[23:14] <infinity> sarnold: It tends to confuse users more than it should probably because apt spells the above "is installed" as "is to be installed".
[23:14] <infinity> sarnold: Since it's reporting the final state it's trying to achieve, not the current state.
[23:14] <sarnold> infinity: hrm. did I know this? I'm .. curious :)
[23:15] <sarnold> infinity: of course there's loads of errors our users report that I have never seen before because users make great testers :)