[00:11] rbasak: the former, yes
[00:11] rbasak: i think i'd need to do some refactoring, but actually, if you're ok with me pulling the parsing to its own module, i think it will make the changes pretty straightforward
[00:11] rbasak: basically, the issue right now is you need a repo object to parse under
[00:12] nacc: making it more modular and testable makes sense. I think it's fine to be in its own module then.
[00:12] nacc: or at least outside the class but in the same module?
[00:12] rbasak: but I think i can abstract all of that out to gitubuntu/changelog.py and just put tests there (before your series), then your series is just changing that class
[00:12] rbasak: yeah, maybe the latter is sufficient
[00:12] rbasak: just out of GitUbuntuRepository
[00:12] nacc: in general I imagine we'll be moving plenty of stuff outside classes in order to make them more easily testable.
[00:13] rbasak: sounds like a plan, thanks! i'll work on that tmrw
[00:13] rbasak: ack
[00:14] rbasak: basically, first commit will create a Changelog class (without using debian's changelog class) and just move our functionality into there, ideally with no changes -- then yours comes in and reworks that class
[06:14] Good morning
=== JanC is now known as Guest77859
=== JanC_ is now known as JanC
[10:34] Hello all, I have an annoying problem after I replaced my current Thawte certificate with the new letsencrypt free ssl certificate
[10:35] in fact https://www.ssllabs.com/ still shows me that there are 2 certificates: a valid one from letsencrypt and the expired one from Thawte
[10:35] How can I fix this please?
[10:35] Genk1: more details needed
[10:36] ssllabs.com server test will in no situation report two server certificates in use in a single report
[10:36] tomreyn: I am running an ubuntu server 16.04 with apache, I have verified that there is only one vhost
[10:37] tomreyn: in fact it showed me two certificates
[10:37] that is unless you have those server certificates in the ssl trust chain
[10:38] post the output on pastebin, replace anything you do not feel comfortable sharing
[10:39] you can produce plain text output using https://github.com/ssllabs/ssllabs-scan
[10:39] tomreyn: OK
[10:39] tomreyn: Ok thanks
[10:40] but normally those kinds of issues are related to the webserver, right? or maybe it's a caching issue?
[10:41] yes it's a configuration issue, not a caching issue
[10:41] tomreyn: but I only have two ssl directives, pointing to the letsencrypt file paths
[10:42] how can it be possible? I have only one vhost and it's in front of me
[10:42] I don't see any information about thawte
[10:44] Genk1: i could not tell, and do not know your server configuration. run "sudo apache2ctl -t -D DUMP_VHOSTS" to check what you have.
[10:45] I have done this before don't worry I know what I am saying :)
[10:45] tomreyn: I am sure the issue is not on the server side
[10:46] because some clients can access the new certificate and some others cannot
[10:46] sudo rgrep -Ei 'SSL(CA)?Certificate' /etc/apache2/
[10:46] it's clearly a caching issue
[10:46] do you have a proxy in front then?
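Aside on the certificate troubleshooting above: a minimal sketch for checking which certificate the running server actually presents, independent of any browser cache. example.com is a placeholder for the real hostname, and the two Apache checks simply repeat the commands suggested in the discussion.

    # Query the live server directly and print the certificate it serves right now
    # (bypasses browser caches entirely; replace example.com with the real host).
    echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
        | openssl x509 -noout -subject -issuer -dates

    # Compare with what Apache is configured to serve:
    sudo apache2ctl -t -D DUMP_VHOSTS
    sudo grep -rEi 'SSL(CA)?Certificate' /etc/apache2/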
[10:47] indeed, sounds like a caching issue
[10:47] for the ones where it's not working, try it in an incognito tab
[10:47] ssl content should never be cached, this would be a configuration issue
[10:48] tomreyn: it shows this
[10:48] SSLCertificateFile /etc/letsencrypt/live/*/fullchain.pem
[10:48] SSLCertificateKeyFile /etc/letsencrypt/live/*/privkey.pem
[10:48] the other directives are commented
[10:48] nav-:
[10:48] do you have a proxy in front then?
[10:48] ok thanks
[10:49] tomreyn: no no it's a simple webserver that I have hosted myself at OVH
[10:49] tomreyn: I did an installation from scratch
[10:50] okay so it can only be browser caching then indeed.
[10:50] i always thought it was wrong they introduced that
[10:51] tomreyn: voila
[10:51] or it's client side proxies, which would be even worse
[10:59] I'm trying to remove old 3.13 kernels and stuff on my 14.04 installation, however aptitude keeps pulling them back
[10:59] Is there any way I can avoid this?
[11:00] Check out the reason why aptitude is pulling them back in.
[11:01] Could you provide some console output? Of removing the (old) kernel and updating, or something.
[11:01] lordievader: i've removed all the packages
[11:01] `aptitude remove -yq $(aptitude search --disable-columns '~i^linux-' -F '%p'|awk '/3.(11|13|16|19).0/') >/dev/null`
[11:02] Afterwards, running `aptitude install` pulls the header files back in
[11:02] I'm not sure how to check why it's pulling them back in, i wouldn't ask here otherwise.
[11:02] Could you show the 'apt-get update' output after removing them?
[11:03] update? you don't mean upgrade?
[11:03] Erm, yes. 'apt-get upgrade'*
[11:04] http://paste.ubuntu.com/25239340/
[11:04] apt-get isn't as strict as aptitude though.
[11:04] So it seems apt says it doesn't see a reason to keep those packages around.
[11:05] those are different packages
[11:05] not the 3.13 ones
[11:05] it doesn't pull in new packages. but we're using aptitude.
[11:07] What is the output of 'apt-cache rdepends linux-headers-3.13.0-X' (correct the version)
[11:10] lordievader: i did that, and some more:
[11:10] http://paste.ubuntu.com/25239357/
[11:10] but i think the issue might be that linux-libc-dev is of version 3.13
[11:11] hmm..no, there doesn't seem to be a direct correlation
[11:12] Could be, but I would expect libc-dev to have a dependency on a header package. Though it could depend on the meta package of the linux-headers.
[11:12] it depends on linux-kernel-headers
[11:12] or rather, it provides that
[11:13] What happens when those headers are removed along with the rdepends? Just to see if apt handles this differently from aptitude, does apt also want to reinstall them?
[11:13] i don't have them installed at all
[11:13] not the rdepends either
[11:13] i removed them as stated earlier, and apt doesn't want to install them
[11:14] aptitude does, but it handles things differently.
[11:14] But aptitude does?
[11:14] yes
[11:15] Strange.
[11:15] Isn't there some package among the install list which could pull in the rest as a dependency?
[11:16] I'm not sure, how do I verify this?
[11:16] Well, I suppose you get a confirmation when running 'aptitude install' (I must admit, I rarely use aptitude).
[11:17] oh that
[11:17] no, not as far as i can tell
[11:17] no packages are scheduled for upgrading, and there's a few packages being removed.
[11:18] i can deal with using apt-get for the rest of time, but i'd prefer to use aptitude, as i've heard it should handle dependencies and such more strictly.
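Aside on the kernel clean-up above: a sketch of commands for reviewing what still references an old headers package before removing it. The 3.13.0-129 version string is only an example and would need to be replaced with whatever version is actually installed.

    # Ask aptitude why it considers the headers package necessary
    aptitude why linux-headers-3.13.0-129

    # See what (if anything) declares a dependency on it
    apt-cache rdepends linux-headers-3.13.0-129

    # List the old-kernel packages that would be removed, for review,
    # before running the actual 'aptitude remove' quoted above
    aptitude search --disable-columns '~i^linux-' -F '%p' | awk '/3\.(11|13|16|19)\.0/'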
[11:18] I thought I read a document somewhere where they recommended Debian users use apt-get over aptitude.
[11:19] lordievader: yep, that was especially for upgrading, since it didn't care that not everything was super in order
[11:20] I see. Still, stricter rules should not install packages which are not needed, I'd say.
[11:21] In the man page I read of a 'why/why-not' command, it says 'explains the reason that a particular package should or cannot be installed on the system'.
[11:21] That may explain why it wants to install header files.
[11:43] lordievader: it doesn't really explain anything unfortunately :/
[11:44] http://paste.ubuntu.com/25239552/
[11:44] The thing is that aptitude doesn't automatically install suggested packages.
[12:02] Hmm, none of those suggested packages are installed?
[12:22] lordievader: nope, not a single one
[12:22] Then I really don't understand why aptitude wants to install it -.-
[12:27] lordievader: that's my predicament. i don't understand it either.
[12:28] I don't really want to say 'use apt-get' but that seems about the sanest option right now.
[12:47] lordievader: i'll evaluate further options, but you might be right
[12:48] thanks for your help though!
[12:49] No problem ;)
[13:15] tomreyn: how to deal with such problems?
[13:16] https://www.ssllabs.com/ssltest/analyze.html?d=www.myniu.fr&latest
[13:16] 2 certificates at the same time
[13:24] nacc: hi! I'm looking at php security updates....is there a round of 7.0 updates being worked on for xenial/zesty?
[13:34] hi, a packaging question
[13:35] if I have a package that installs a script in /etc/update-motd.d/
[13:35] because it's in /etc, it's considered configuration
[13:35] and not removed via "apt remove", just "apt purge"
[13:35] but it's a script, not a config file, and it's run on login. It might need the package installed in order to work properly
[13:35] how is this usually sorted?
[13:55] tomreyn: I fixed the issue
[13:56] the problem was related to apache, some child process was still running and had the old configuration
[13:56] service apache2 restart and even service apache2 stop didn't really stop apache
[13:57] I needed to do a killall
[13:57] after that everything was working just fine
[13:57] thank you all
[14:05] ahasenack: update-notifier-common drops snippets in /etc/update-motd.d where they do "[ -x /usr/lib/... ] && exec /usr/lib/..." and presumably removing the package removes the /usr/lib/... file
[14:06] so the file in /etc remains, but does nothing
[14:06] this one calls out to a snap, I'm not sure I should hardcode the snap bin path
[14:07] with the -x check
[14:07] but I can find another way
[14:07] ahasenack: that would be my assumption but the update-notifier-common package was the one example I could find
[14:07] ok, thanks
=== ogra_ is now known as ogra
=== drab_ is now known as drab
[15:12] mdeslaur: i noticed that debian had moved up, I can do that early next week (bump to 7.0.20)
[15:13] hrm, I need 7.0.22
[15:13] mdeslaur: let me check debian again
[15:13] mdeslaur: oh nm! 7.0.22 indeed
[15:14] mdeslaur: should be a straightforward update on my end. Do you want me to send the stuff your way so it goes security -> updates?
[15:14] yeah, stick it somewhere and I'll build it as a security update
[15:15] mdeslaur: and just so i actually remember this time, you want it in the security pocket "before" the updates pocket? Or is that less relevant?
(I recall you having to do two pocket copies for a prior upload)
[15:16] we have two options: 1- we sru it into -updates and then I rebuild it in -security a week later, or 2- I build it as a security update, and release it to -security
[15:16] if the update is straightforward without any big packaging changes, I can push it as a security update directly
[15:17] ie: no dependency changes, etc.
[15:17] nacc: show me the package once you've worked on it and I'll see if it can directly be released as a security update
[15:18] mdeslaur: yeah, i expect no packaging changes, based upon prior uploads, but i'll let you know
[15:18] ok, thanks
=== smoser` is now known as smoser
[16:58] nacc: style file pushed.
[16:58] I added some more items while I was thinking of them. Feel free to dispute :)
[17:02] rbasak: thanks!
=== JanC_ is now known as JanC
[18:34] gabrielc: did this happen *after* you got chrome 60?
[18:34] odd, that new signingkey is not announced ..
[20:51] rbasak: seeing a very strange failure with src:pacemaker (a --no-fetch re-import to test my changes). One specific version of pacemaker orig is showing up with heartbeat_(version).orig.tar.gz in pristine-tar instead of pacemaker_(version).orig.tar.gz. Running `gbp import-orig` manually on the same file (based upon the logs from the importer), it creates the pacemaker orig tarball correctly. I believe
[20:51] `gbp` only uses the tarball name to determine the srcpkg and I don't see anywhere that would make it see heartbeat...
[21:16] nacc: file a bug I guess? I don't see right now what's going on.
[21:18] rbasak: i added a bunch of debugging
[21:18] rbasak: and i see what's happening but not sure why
[21:18] rbasak: looks to be a gbp bug :)
[21:19] gbp:info: Source package is heartbeat
[21:19] rbasak: bug filed
[21:20] rbasak: for the importer that is, i'll communicate with upstream once i understand why :)
[21:50] Can I automatically give a user a root shell when they SSH in so that they don't have to prepend "sudo" before their privileged commands or have to run "sudo -s"?
[22:13] cliluw: i don't think you would generally want to do that
[22:13] if you want the user to be root you could set the uid in shadow and passwd to 0, and set the home dir to match
[22:13] nacc: Generally, that's correct. In this case, I do want to do this - this is for an automated script.
[22:17] cliluw: note, though, your question isn't exactly coherent
[22:18] if it's automated, you can set "PermitRootLogin without-password" in your /etc/ssh/sshd_config then use SSH keys to bring the user in
[22:18] cliluw: 'giving a user a root shell' does not only mean "that they don't have to prepend "sudo" before their privileged commands"...
[22:18] cliluw: it means all commands are privileged commands :)
[22:18] it would be better to use sudo and limit what commands can be used
[22:19] or set authorized_keys restrictions on what can be executed by the program, and optionally enforce that with apparmor and pam_apparmor ..
[22:19] IMHO, if the manner in which ssh is used means that people are using unrestricted sudo all the time in practice, then it's fine to log in directly as root. This is against the accepted wisdom though.
[22:20] It's only of value to create independent users for individual admins if sudo is restricted and/or actual auditing really happens of what they do.
[22:21] Otherwise the per-admin user is just a hurdle that provides no actual security benefit.
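On the /etc/update-motd.d packaging question earlier in the hour, a minimal sketch of the guard pattern ahasenack and nacc describe: the conffile stays behind after package removal but does nothing. /usr/lib/mypackage/motd-hook is a hypothetical path standing in for whatever the package (or snap) actually ships.

    #!/bin/sh
    # Example conffile shipped as /etc/update-motd.d/50-mypackage:
    # only run the real implementation if it is still present, so the
    # leftover script is harmless once the package has been removed.
    [ -x /usr/lib/mypackage/motd-hook ] && exec /usr/lib/mypackage/motd-hook
    exit 0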
[22:22] right, I think in this case, using ssh keys and allowing ssh as root with specific keys is the best choice
[22:22] (presuming cliluw's description is accurate)
[22:23] cliluw: cleanest and simplest/quickest (which for me has been at times very important) solution is to PermitRootLogin without-password, set up a passwordless ssh key on the client and use the "command" to launch a simple bash script that filters what's allowed (and potentially logs things)
[22:23] security is only useful when measured against realistic threats and targeted assets
[22:24] heh
[22:24] drab: +1
[22:24] otherwise it's just stuff that sounds nice to the ears and looks good in a presentation
[22:25] Restricting to a simple bash script that filters and logs is a good idea, but be careful about implementation. It's quite easy to leave other channels open, making it pointless if your threat model includes a malicious (or compromised) admin.
[22:25] if I could only get back all the time I wasted on "best practices in a vacuum" I'd enjoy a vacation for the rest of my life :P
[22:25] rbasak: agreed
[22:28] nacc: Allowing root login is an option but not ideal. I would prefer that the actions of the script show up as a non-root user for better auditing.
[22:28] cliluw: is the user logging in with a password? or ssh key?
[22:28] nacc: Only SSH key, no password
[22:28] cliluw: if the latter, you can give them passwordless sudo, since it seems like your model is trusting that specific user (and their key)?
[22:29] cliluw: again, not recommended, but if you want it for auditing, that would work
[22:29] cliluw: they'd need to type sudo, but they wouldn't be prompted for a password, at least
[22:29] or you can use the command and wrapper and prepend sudo to all of them after confirming the cmd is allowed
[22:29] cliluw: otherwise, i think, you'd need to add the user to the appropriate group or do what sarnold suggested
[22:29] drab: ah true, yeah, that'd work
[22:34] drab: Seems a bit tricky to write a wrapper script that will be able to filter down to only the commands I want to run since Bash is such a syntactically complex language.
[22:35] cliluw: it depends what you're doing, for CI type of stuff ime it was very simple, sometimes a straight string match with a simple check on special characters like ;
[22:36] but like I said, I'm not you and I don't have your exact problems, so I don't know
[22:36] I've really come to believe in specificity and constraints, we all make tradeoffs all the time, whichever is best for you I don't know
[22:37] rbasak: what did we decide to do about MPs that already were sponsored? https://code.launchpad.net/~ahasenack/ubuntu/+source/samba/+git/samba/+merge/326073
[22:38] drab, nacc: Thanks for your advice. I'll probably go with the passwordless sudo option. It sucks to have to prepend "sudo" to every command but it's not the end of the world, I suppose.
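A rough sketch of the authorized_keys forced-command approach drab and sarnold describe above, assuming a hypothetical wrapper path, key, and command whitelist. sshd places whatever the client asked to run into SSH_ORIGINAL_COMMAND; the wrapper logs it, matches it against a small list of fixed strings, and prepends sudo only for commands it recognises.

    # One line in ~/.ssh/authorized_keys for the automation key (key truncated):
    command="/usr/local/bin/run-wrapper",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA... automation

    # ...and /usr/local/bin/run-wrapper itself:
    #!/bin/bash
    req="$SSH_ORIGINAL_COMMAND"
    logger -t run-wrapper "requested: $req"
    case "$req" in
        "systemctl restart myapp"|"apt-get update")
            exec sudo $req    # word-splitting intended: $req already matched a fixed string
            ;;
        *)
            echo "command not allowed: $req" >&2
            exit 1
            ;;
    esac

Matching only fixed strings is the "straight string match" drab mentions; anything more flexible needs the care rbasak warns about, since it is easy to leave other channels open.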
[22:41] cliluw: if you do that you might as well have a "blank" wrapped that prepends sudo
[22:41] nothing to lose
[22:41] wrapper*
[22:42] at least it takes the suckiness out of it for you and it requires no additional investment to set up other than one line in the authorized_keys file and one line in the bash script
[22:42] well, two with the header :)
[22:45] nacc: I've been pushing upload tags for those regardless, since they can be picked up by 1) any developer who is doing archeology or another merge, even if it wasn't incorporated into the commit graph by the importer; and 2) our expected re-imports before we declare commit graph stability
[23:48] how do you install on an EFI partition
[23:48] the installer doesn't support it?
[23:51] ikla: the installer most definitely supports it
[23:51] 16.04.2 ?
[23:51] yes
[23:51] there is no option for EFI partition type
[23:51] in manual mode
[23:51] it goes into efi install if it boots as efi
[23:51] are you positive the installer booted from an efi device/as efi?
[23:51] iirc there's no way to manually specify "install in efi mode"
[23:52] it depends upon boot time
[23:52] no
[23:52] you can create partitions manually
[23:52] bios boot
[23:52] then efi part
[23:52] then all the rest
[23:52] shouldn't matter if you booted from efi or legacy
[23:52] boards support dual mode and could possibly boot non-efi but still have support
[23:53] can I pass a kernel parameter to force the installer into efi?
[23:53] cause it has no option in manual mode which is b.s.
[23:53] ok, maybe you're right, all I know is what I experienced, efi installs worked when I booted in efi mode, but I never tried to create an efi install when booting in bios mode
[23:53] even tried creating the partitions with fdisk
[23:54] I don't recall any specific option for the kernel, no
[23:54] set the correct partition types and the installer in manual mode doesn't know how to handle them
[23:54] but I don't do a lot of efi, only needed it once to boot from a nvme partition
[23:54] yeah - I only have an NVMe device
[23:55] in this system
[23:55] :)
[23:57] so yeah, in my case all I did was to create a usb install key, press f9 at boot and selected the key under the UEFI tree instead of Legacy tree
[23:57] and then in manual mode I could create the efi partition and all
[23:57] that's the best/most I can offer from the little I know about it
[23:58] I'll try that
[23:58] mind you, my plan ultimately failed... but that was a bios problem
[23:59] even after correctly installing to the nvme device I could not subsequently boot from it
[23:59] in case this info is of any use to you... I spent 4 days going back and forth with SM support about nvme bios to conclude this wasn't possible on X9s
[23:59] aww
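To wrap up the EFI thread: a sketch, assuming a single NVMe disk at /dev/nvme0n1, of checking whether the live environment booted via UEFI and of creating an EFI System Partition by hand. Device name and sizes are illustrative only, and the partitioning commands are destructive.

    # The installer only offers an EFI install if it was itself booted via UEFI;
    # this directory exists only when the running system booted in UEFI mode.
    [ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted in legacy/BIOS mode"

    # Manually creating an EFI System Partition on an NVMe disk:
    sudo parted /dev/nvme0n1 mklabel gpt
    sudo parted /dev/nvme0n1 mkpart ESP fat32 1MiB 513MiB
    sudo parted /dev/nvme0n1 set 1 boot on    # on GPT this marks the partition as an ESP
    sudo mkfs.vfat -F32 /dev/nvme0n1p1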