[05:23] good morning to all
[06:05] Good morning
[06:32] morning folks
[07:43] Heya!
[08:20] EoflaOE: awake?
[08:21] Yes lotuspsychje
[08:21] EoflaOE: got a proposal for a UWN article about making #ubuntu-bugs-announce more popular, maybe suggest it to the UWN team?
[08:22] EoflaOE: maybe if we mention it, more users find their way up there
[08:22] lotuspsychje: Yes.
[08:24] my dear beloved brothers! >:)
[08:24] hi marcoagpinto
[08:24] hi marcola
[08:24] Hi
[08:25] lol
[08:43] 👋
[08:44] hi lordievader
[08:46] Hey EoflaOE How are you doing?
[08:47] Doing fine. How about you?
[09:17] Doing good here :)
[09:39] ahhh... in my research on the warfare eras I learned that ancient warfare ended with the discovery of iron :)
[09:41] Here's an interesting issue for you: I've been seeing stuttering movement from an external Bluetooth mouse for a few days, despite opening up the mouse several times looking for problems. A BT-connected keyboard/touchpad didn't have the same problem. The BT hub is connected internally on USB. This PC (Asus T300CHI transformer) has a single USB 3.0 mini connector on the side. This has two
[09:41] connectors (USB2 mini + USB3 mini) as one so there is backward compatibility. For some time the USB3 side has been flaky because, as far as I can tell, physical leverage has broken the solder traces to the mainboard. I usually have a USB3 mini to USB-A adapter connected even when no USB device is plugged in. Today I had inspiration and unplugged the adapter (a passive cable) and the BT mouse instantly
[09:41] behaved properly. So.. physical damage on the edge connector causes internal spurious USB interference.
[09:42] guys?! Has the kernel issue been fixed?
[09:42] :)
[09:42] so that I can update my 19.04?
[09:43] marcoagpinto: The kernel issue?
[09:43] V 15 something
[09:43] :)
[09:44] What?
[09:44] that people were complaining about a week or two ago
[09:44] :)
[09:44] 5.0.15
[09:44] Buaaaaaaaaa... I can't remember the version... too much pressure
[09:44] This really tells me nothing. Link to a bug report?
[09:45] lordievader: people were complaining that it would freeze Ubuntu
[09:45] or cause flickering on the screen
[09:45] lotuspsychje: Do you know what bug marcoagpinto is talking about, and if it has been fixed?
[09:46] I know lotuspsychje had flickering on a Clevo
[09:48] Yeah, I remember him talking about that.
[09:57] I have the same graphics card in this HP and no flickering with the same kernel
[10:17] what is this "waiting for unattended-upg to exit"?
[10:17] it takes 10 minutes to disappear?
[10:17] on software updater
[10:41] Unattended upgrades busy performing updates?
[10:45] lordievader jeremy31 tomreyn found a workaround for my flickering bug on 5.0, see https://bugs.launchpad.net/ubuntu/+source/linux-hwe/+bug/1838644
[10:45] Ubuntu bug 1838644 in linux-hwe (Ubuntu) "Booting into desktop results in flickering" [Undecided,Confirmed]
[10:46] marcoagpinto: ^
[10:57] lordievader: yes, exactly
[10:57] :)
[10:57] sorry... I went to the store to buy cola... I ran out of stock
[11:22] lotuspsychje: I've added some info to your flicker bug report
[11:23] lets c
[11:24] tnx TJ- you want me to try a 4.19 mainline then?
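For reference, trying a mainline kernel like the v4.19 build discussed next generally means downloading its .deb packages from kernel.ubuntu.com and installing them with dpkg. A minimal sketch, assuming the v4.19 .deb files (headers, image, modules) have already been downloaded into the current directory; exact filenames vary per build and architecture:

    # install all downloaded mainline v4.19 packages at once
    sudo dpkg -i linux-*4.19*.deb
    sudo reboot
    # after rebooting, confirm which kernel is actually running
    uname -r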
[11:24] 4.18 is confirmed working
[11:24] lotuspsychje: it would help, yes, since if you can confirm the issue is between the two, a git bisect across those commits would not need a lot of steps
[11:25] TJ-: alrighty, lets grab a 4.19
[11:26]  git bisect start v4.19 v4.18 -- drivers/gpu/drm/i915
[11:26] Bisecting: 273 revisions left to test after this (roughly 8 steps)
[11:27] TJ-: a lot of folders on the 4.19 series, which one to start with?
[11:28] lotuspsychje: v4.19 -- the first
[11:28] https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.19-rc1/ this?
[11:29] no, that's a release candidate. https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.9/
[11:29] kk
[11:29] oops, not that one !!
[11:29] that's 4.9 not 4.19 :D
[11:29] https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.19/
[11:29] on it
[11:30] lotuspsychje: the upstream bug suggests it affects 4.19 onwards so if you can confirm that we're much closer to finding the cause
[11:31] i can still use the i915.fastboot=0 bootline right
[11:35] ok reboot test
[11:38] TJ-: 4.19 also flickering, i presume thats what you wanted to prove right?
[12:11] lotuspsychje: fab, yes :)
[12:12] lotuspsychje: do you have a local (powerful) amd64 system to do kernel builds on? It'd make it much easier and faster to do these tests since you could do your own bisect and kernel builds
[12:13] Howdy folks
[12:13] TJ-: intel i5 here, and my nuc i7
[12:14] lotuspsychje: the i5 would be a good one... after the first complete build (which will take some time) later builds will be incremental so should be much quicker
[12:14] TJ-: is it 4.19 bisect time, thats what you suggest?
[12:15] lotuspsychje: bisect between v4.19 (bad) and v4.18 (good) just using the drivers/gpu/drm/i915 path
[12:16] TJ-: is that intel-drm daily builds you mean for 4.19 specific then?
[12:20] What is "bisect"?
[12:20] EoflaOE: compare the source code between versions
[12:20] :)
[12:20] to try to find out what introduced the issue
[12:21] Thanks marcoagpinto
[12:21] >:)
[12:21] I am just a little demon
[12:23] lotuspsychje: no, it isn't daily, it means building the kernel as it exists between 4.18 and 4.19 using bisection (bisect), which divides the distance between the 'good' and 'bad' commits by 2 each time
[12:24] TJ-: so you want me to find the weak spot in the 4.18 series?
[12:24] 4.19 sorry
[12:24] From my earlier git-bisect report it expects there to be 8 steps (builds) to find the problem commit
[12:24] lotuspsychje: correct
[12:26] TJ-: https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.19.5/ to start with?
[12:26] It said "Bisecting: 273 revisions left to test after this (roughly 8 steps)" ... so that means 273 /2=136 /2=67 /2=34 /2=17 /2=8 /2=4 /2=2 /2=1
[12:27] not sure i follow that
[12:27] lotuspsychje: no, the first 'bad' is v4.19 so it'll work backwards from that tag to v4.18, passing through only the commits to the drivers/gpu/drm/i915/ changes
[12:28] so its the 4.18 series to bisect then
[12:29] lotuspsychje: the count is the number of commits affecting the drivers/gpu/drm/i915/ path ... bisection means take the middle commit between 'good' and 'bad' and test it. If that is 'bad' then you've halved the number of commits to test, so you build the next test from a commit half way between 'good' and this new 'bad', etc
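The "git bisect start" command quoted at 11:26 maps onto a session roughly like the sketch below, assuming a local clone of Linus' mainline tree (a multi-gigabyte download):

    # clone the mainline kernel source
    git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    cd linux
    # v4.19 is the known-bad tag, v4.18 the known-good one; restrict the
    # search to commits touching the i915 driver path
    git bisect start v4.19 v4.18 -- drivers/gpu/drm/i915
    # git checks out the middle commit and reports roughly how many steps remain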
[12:29] lotuspsychje: no, it is the commits added between v4.18 and v4.19, this has nothing to do with the stable releases (v4.18.X are stable releases)
[12:30] yeah but i dont get those numbers, can you gimme examples of kernels i should test up or down
[12:30] lotuspsychje: those added commits will have arrived as soon as the v4.19 development cycle started and would be contained in the various v4.19-rcX points
[12:31] lotuspsychje: OK, let's go back to (git) basics.
[12:34] Every kernel change is self-contained as a small patch that affects only the files that relate to the change being made. We call that change a "commit". Every commit has a unique SHA hash which is the commit ID. Every so often the project lead (in this case Linus Torvalds) decides to make a test 'release' so a commit is 'tagged'. A tag is the version number you're more familiar with, e.g. v4.19-rc1,
[12:34] v4.19-rc2, ... maybe to v4.19-rc7 ... when Linus is happy there are no regressions he'll make a full release which is tagged as v4.19 ... and then start the 4.20 release cycle.
[12:34] yeah i get every kernel version has commits added
[12:35] In the 4.20 release cycle there may come fixes for bugs and regressions reported against v4.19 ... those are added to a separate git repository, the stable tree, maintained by Greg Kroah-Hartman. When Greg makes releases he adds a number to the release so you'll see v4.19.1, v4.19.2, v4.19.3 ... and so on. Those are where the Ubuntu kernel will pull most of its fixes from
[12:36] yeah noticed those
[12:36] So in your specific case you know the v4.18 release works fine but the v4.19 release is bad. We also deduce in your case the problem is most likely in the i915 driver itself, so we can limit where we look for the bug to the drivers/gpu/drm/i915/ path in the kernel source.
[12:37] So when we ask git-bisect to get to work it counts all the commits on that path between the two tags v4.18 and v4.19, which is 273
[12:37] ahh i see
[12:37] 273 versions to test in between
[12:37] then it picks the commit half-way between, which is ~136 commits after v4.18, and builds the kernel there
[12:38] you test that build and tell git-bisect it was either 'good' or 'bad'
[12:39] If it was bad, git-bisect then knows the problem is between v4.18 and 136 commits later, so it picks the commit half-way between (~67) and you build again.
[12:39] TJ-: but im not a dev, i know nothing on commits, only how to install kernel versions
[12:39] so you tell me which kernel version is halfway, at 136 commits?
[12:39] lotuspsychje: right, but this process only involves issuing simple git-bisect good/bad commands after each test, executing the kernel build instruction and waiting for the kernel to be ready, then installing and testing it.
[12:40] git bisect automatically checks out the commit to be tested and then you issue the kernel build command and sit back until it is ready
[12:40] git-bisect is something else than kernel bisect?
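Building the commit that git-bisect checks out, as TJ- describes, typically looks like the sketch below on Ubuntu; the package list and config choices are assumptions, not commands taken from this conversation:

    # tools commonly needed for an Ubuntu kernel build
    sudo apt install build-essential flex bison libssl-dev libelf-dev bc
    # start from the running kernel's configuration
    cp /boot/config-"$(uname -r)" .config
    make olddefconfig               # accept defaults for any new options
    # build installable .deb packages (the first build is the slow one)
    make -j"$(nproc)" bindeb-pkg
    sudo dpkg -i ../linux-image-*.deb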
[12:41] lotuspsychje: instead of you telling seth and him telling git-bisect and then waiting for seth to build the kernel, you do that part yourself which means you aren't waiting on someone else
[12:41] lotuspsychje: git-bisect is the tool used to do kernel bisecting
[12:41] right
[12:42] that's what devs mean when they talk about kernel bisect, they're using git-bisect's automated workflow which just involves telling git-bisect which are the good and bad commits/tags and then it checks out the commit 1/2 way between those and you trigger a build, test, report good or bad and go around again :)
[12:42] and as I said, git-bisect indicates you'll only need a max of 8 builds to find the bug
[12:43] TJ-: yeah i noticed the dev helping me was building stuff on the 5.0 series, which is where he presumed the commits were
[12:44] but not sure i wanna start playing with git bisect lol
[12:44] lotuspsychje: I promise you it is simple :) here's the man-page... all you'd use is the 'start' 'good' and 'bad' options
[12:44] http://manpages.ubuntu.com/manpages/bionic/en/man1/git-bisect.1.html
[12:45] can you give me an example on one kernel
[12:45] lotuspsychje: how do you mean 'on one kernel' ?
[12:46] well we want to test on the 4.18 and 4.19 series specifically right?
[12:47] does this test the kernel you booted on?
[12:47] The kernel is ONE tree of code with continuously added changes (commits). Some of those commits get 'tagged' with version numbers is all. There are NOT separate 'kernels' or 'trees' for each release version.
[12:49] TJ-: give me an example of git-bisect, ill see if i understand
[12:50] It goes like this: 1) use git to clone the kernel source code locally 2) install the tools required to build the kernel 3) use git-bisect to indicate the known 'good' and 'bad' commits (tags) 4) git-bisect will 'check out' the source tree at the middle commit 5) build the kernel from source to binary 6) install the built kernel 7) test 8) tell git-bisect if it was 'good' or 'bad' 9) repeat from 5
[12:51] lotuspsychje: that man-page contains an example workflow using bisect on the kernel, you'll see there it does "git bisect good v2.6.13-rc2"
[12:54] TJ-: tnx for the explanation but im not gonna start messing with it
[12:55] this is the work devs need to do
[12:55] that's a shame, you'd have the exact commit that broke it in a few hours
[12:55] they know which kernels work and which don't now..
[12:56] They've got to do all I've described and you've got to test every build anyhow to discover the bad commit
[12:56] Especially as this only seems to affect that model/BIOS
[12:57] i wanna help kernel testing but not gonna start building them
[12:58] Building kernels is not that complicated...
[12:58] maybe, but the devs are paid to do this
[12:59] and im not really pleased this happened on lts..
[12:59] knowing this works on several other kernel versions
[13:00] Not enough qa testers on real and different hardware?
[13:00] maybe lordievader but if 5.2 and 5.3 work nicely, the commits used there are known to work
[13:01] they 'should' know where the culprit is?
[13:01] Back when I still did qa testing I mainly used virtual machines since that way you could test multiple targets at the same time.
[13:01] But you miss these kinds of things that show with particular hardware combinations.
[13:02] you mean that only i (on this machine) can really find the right 'bad' commit that's causing it?
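The nine-step loop TJ- lists at 12:50 ends with reporting each test result back to git-bisect. A sketch of that reporting step, assuming the freshly built kernel has been booted and checked for the flicker:

    git bisect bad       # the flicker reproduced on this build
    git bisect good      # ...or it did not
    # after the last step git names the first bad commit, e.g.
    #   <sha> is the first bad commit
    git bisect reset     # restore the tree to its original state when done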
[13:02] lotuspsychje: you have to consider those 'devs' have greater priorities than this issue since it only affects a small number of models/BIOSes, whereas you have a greater interest in solving it. Being able to tell the devs X commit is the problem might save several weeks of back and forth
[13:03] ok fair enough
[13:03] lotuspsychje: and the other point is the devs may be paid but not specifically for this kind of issue, they're paid to solve issues for paying customers mostly... if that helps the community users it's a benefit of course
[13:06] TJ-: is it possible, if this bug isnt traced right, that this will happen in the next ubuntu versions?
[13:06] If they are paid at all.
[13:08] lotuspsychje: of course, more so because the Intel i915 dev that looked at the upstream bug in March/April hasn't responded about it since
[13:10] right..
=== jelly-home is now known as jelly
[14:33] TJ-: could you doublecheck this plz: https://bbs.archlinux.org/viewtopic.php?id=244115
[14:44] lotuspsychje: that commit (a99790bf5c7f3d68d8b01e015d3212a98ee7bd57) suggests the effect is wider than just video
[14:44] but they're also mentioning the flickering
[14:45] and my wifi:
[14:45] from your dmesg
[14:45] description: Wireless interface
[14:45] product: Wireless-AC 9260
[14:45] [ 0.238658] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[14:46] [ 0.369195] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[14:46] [ 0.370267] acpi PNP0A08:00: FADT indicates ASPM is unsupported, using BIOS configuration
[14:46] [ 0.378300] pci 0000:00:1c.3: ASPM: current common clock configuration is broken, reconfiguring
[14:46] [ 2.409840] r8169 0000:01:00.1: can't disable ASPM; OS doesn't have ASPM control
[14:47] So, all that suggests that despite the ACPI FADT saying there is no PCIe ASPM, the BIOS has enabled it anyhow
[14:47] But Linux cannot control it
[20:56] lolz .. i knew something was coming
[20:56] redmine server issues
=== kallesbar_ is now known as kallesbar
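As an aside on the ASPM exchange at 14:45-14:47: the mismatch TJ- points out (the FADT denying ASPM support while the BIOS enables it anyway) can be inspected, and ASPM overridden, roughly as sketched below; this is illustrative only, not a fix proposed in the conversation:

    # the FADT/_OSC messages quoted above
    dmesg | grep -i aspm
    # per-device link control state, e.g. "LnkCtl: ASPM Disabled"
    sudo lspci -vv | grep -iE 'aspm|lnkctl'
    # adding pcie_aspm=off to the kernel command line disables ASPM
    # entirely, if the BIOS-enabled state is suspected of causing trouble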