RAOF | Bah! | 04:05 |
---|---|---|
RAOF | What's the sensible way to get a mainline kernel building on the -fPIC-all-the-things default yakkety gcc? | 04:06 |
i915 | anybody know how the /proc/irq/xxx/ affinity_hint and node files work? I understand the rest of the files under /proc/irq; they're quite important for controlling which cpus get which interrupts in a multiprocessor environment | 06:23 |
i915 | So I'm curious about those 2 files. node sounds like some cluster thing for cpus, and affinity_hint I'm not sure what the heck that is. smp_affinity and smp_affinity_list are quite important | 06:24 |
=== cpaelzer_ is now known as cpaelzer | ||
apw | i915, smp_affinity is literally a bitmap of which cpus can handle a specific incoming interrupt ... _list is the same thing in human-readable form | 06:57 |
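As a rough illustration of the smp_affinity / smp_affinity_list relationship described above, a minimal Python sketch (assuming a Linux system; the IRQ number 0 is only a placeholder, substitute one from /proc/interrupts):

```python
# Decode one IRQ's CPU-affinity bitmask and compare it with the list form.
from pathlib import Path

irq = 0  # placeholder: pick any entry that exists under /proc/irq/
base = Path(f"/proc/irq/{irq}")

# smp_affinity is a hex bitmask: bit N set means CPU N may handle this IRQ.
# On machines with many CPUs the mask is printed as comma-separated 32-bit
# groups, so strip the commas before parsing.
mask = int((base / "smp_affinity").read_text().replace(",", ""), 16)
cpus = [cpu for cpu in range(mask.bit_length()) if mask & (1 << cpu)]
print("allowed CPUs (decoded from the bitmask):", cpus)

# smp_affinity_list is the same information in human-readable list form.
print("smp_affinity_list:", (base / "smp_affinity_list").read_text().strip())
```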
apw | RAOF, you need to apply the fix for -fpie to the main Makefile, which is what mainline-build-one does | 07:02 |
i915 | I know that; I'm asking about the affinity_hint and node files | 07:02 |
apw | oh your last line wasn't very clear | 07:02 |
i915 | what are those 2 files for | 07:03 |
RAOF | apw: I found that passing -fno-pic makes it *almost* build. Where's mainline-build-one so I can steal from it? | 07:03 |
RAOF | Also, good morning and commiserations. | 07:03 |
RAOF | May your bees be extra comforting today. | 07:03 |
apw | RAOF, you don't want that really, you want to git log Makefile in yakkety and take the -fPIE patch there | 07:03 |
RAOF | Hm. I apparently haven't pulled yakkety in approximately 100MB worth of time. | 07:06 |
apw | i915, affinity_hint is a hint from the device driver as to where the irqs are best handled | 07:07 |
apw | i915, it appears to be set in specific drivers where that driver knows about the h/w | 07:07 |
apw | i915, node appears to be which numa node the device is associated with, which i would interpret as meaning | 07:12 |
apw | i915, the nearest memory node for incoming/outgoing data | 07:12 |
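To see affinity_hint and node next to smp_affinity for one interrupt, a hedged Python sketch (the IRQ number is again a placeholder; affinity_hint is only meaningful when the driver fills it in):

```python
# Dump smp_affinity, affinity_hint and node for a single IRQ.
from pathlib import Path

irq = 0  # placeholder: substitute a device IRQ from /proc/interrupts

for name in ("smp_affinity", "affinity_hint", "node"):
    path = Path(f"/proc/irq/{irq}/{name}")
    if path.exists():
        # affinity_hint is a bitmask suggested by drivers that know the
        # hardware; node is the NUMA node the device is associated with.
        print(f"{name}: {path.read_text().strip()}")
    else:
        print(f"{name}: not present for this IRQ")
```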
RAOF | apw: I can't seem to find any relevant patch to Makefile in kernel.ubuntu.com/ubuntu/ubuntu-yakkety.git:master? That's where it's meant to be, right? | 07:13 |
apw | RAOF, UBUNTU: SAUCE: (no-up) disable -pie when gcc has it enabled by default | 07:14 |
apw | is the one i mean | 07:14 |
apw | c863674de4911d8fb7643d5dfe4e5063b052bfe5 | 07:14 |
RAOF | I have clearly messed up my git somehow; I can see that commit, but not in the history of yakkety/master. Oh well. | 07:16 |
RAOF | Thanks! | 07:16 |
apw | RAOF, master-next is always better, because reasons | 07:18 |
RAOF | I'll try to remember that in future. | 07:18 |
i915 | anybody out there know much about the mei-me driver and what all this Management Engine Interface (Intel(R) MEI) stuff is good for? I still don't get what this host-to-ME interface is for. It seems like a remote management tool, but what's the point when we have so many already? Why build it as an LKM or something internal to the ubuntu kernel? | 07:29 |
apw | i915, the linux interface to the mei is all about letting the OS configure the remote management interfaces and the like | 08:01 |
apw | i915, the specific features are per chipset though so you would have to look at the chipset docs on what all you can change | 08:01 |
i915 | ya, doesn't seem like a major deal; why not just remote desktop / vnc over to the machine at that point, or ssh over | 08:18 |
apw | management interfaces are for when the machine is broken enough that those don't work | 08:22 |
i915 | Oh, I can kind of see it in that case, but 90% of the time you probably want to be physically there, because at that point the problem could be hardware related or a complete shutdown | 08:25 |
apw | with a management interface you don't have to be there, which means the machine doesn't have to be where you are | 08:26 |
i915 | And what happens if the mei-me driver gets cooked? then you're screwed anyway, unless it's built into the bios | 08:26 |
apw | the driver is not for operational purposes, those are independent; it is to allow configuration before failure | 08:26 |
i915 | in which case you're only cooked if the bios or computer is completely off, which is even more rare | 08:26 |
i915 | I don't think I completely get it then. Is it for remoting in to fix a problem, or for shutting down gracefully with certain configurations? | 08:27 |
apw | the physical device is for analysis and fixing remotely, the kernel driver is to let the host | 08:28 |
apw | OS configure the device to define its actions and abilities | 08:28 |
i915 | and what exactly is the purpose of the mei-me LKM driver if not for controlling the remote management process? | 08:29 |
i915 | So it's like: the host machine you're at issues commands to the target machine running the mei-me LKM, which sets the actions and abilities the target machine uses in case mei-me crashes or other really bad issues happen, so the host can still remote in | 08:33 |
i915 | there has got to be some fail-safe with this, so if the mei-me module crashes there is still some default that allows one to remote-manage it, right? Do I have this right now? | 08:34 |
apw | i915, no, you talk to those interfaces "remotely", usually over the network. the local interface is only used to configure the local device | 08:34 |
i915 | And if I do, then the management stuff is part of the bios or some fixed place in memory that all computers have available for it | 08:34 |
apw | so you can change config without rebooting into the bios | 08:34 |
i915 | Still, you're screwed if the NIC goes down | 08:35 |
apw | it's a separate thing, mostly hidden from the host | 08:35 |
apw | it uses the nic directly in a more reliable way | 08:35 |
i915 | ok, so this is to change bios settings without shutting down and hitting F10/F12, etc. | 08:36 |
* apw wanders off | 08:37 | |
i915 | bottom line: if the NIC is down or there is no network connectivity, then there is nothing you can do | 08:37 |
Larry__Tate | Is there any way to know when the next kernel release will occur? | 17:52 |
Larry__Tate | And if the fix for this bug will be included: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1437492 | 17:54 |
ubot5 | Launchpad bug 1437492 in linux (Debian) "boot stalls on USB detection errors" [Unknown,New] | 17:54 |
apw | Larry__Tate, there is an SRU announce list which has dates posted | 17:54 |
Larry__Tate | Ok, any chance you can point me to where that is located? I've searched but come up empty... | 17:55 |
apw | kamal, ^ | 17:56 |
apw | Larry__Tate, that fix looks to be in 4.4.0-25.44 and later, so in this current cycle | 17:56 |
Larry__Tate | apw that is good news. | 17:58 |
kamal | Larry__Tate, apw: yes, that fix is in the kernel sitting in -proposed right now. we do expect to release this coming Monday. | 18:02 |
Larry__Tate | Whew. That is good news. Thanks, kamal, apw! | 18:03 |
Larry__Tate | Just out of curiosity, is the only way to find a release date to scroll through the entire mailing list for the kernel team? | 18:04 |
Larry__Tate | Ah, never mind guys. I just saw that there was a specific list for announcements. Thanks again. | 18:05 |
i915 | curious, anybody know about this node file in /proc/irq? I think it has to do with the NUMA node, but how is any of this different from the smp_affinity files? | 18:26 |
=== alvesadrian is now known as adrian | ||
apw | smp_affinity is about processor affinity, numa is about memory affinity | 19:02 |
apw | it is possible to have cpuless and memoryless nodes which are separate | 19:03 |
i915 | so what does the /proc/irq/xxx/node file tell you or modify for you that smp_affinity can't? | 19:19 |
i915 | it always has 0 for me | 19:20 |
apw | the memory affinity ... which is separate to cpu affinity | 19:22 |
i915 | what does memory affinity actually set or tell you, I guess | 19:22 |
i915 | if it's memoryless or cpuless, what does that mean for an irq? doesn't make much sense to me. nodes seem like the same thing as a collection of cpus that you control with the bitmask | 19:24 |
apw | i915, a node is a collective object that contains 0 or more cpus and 0 or more memory | 19:31 |
apw | normally cards are associated with a numa domain; in a small machine there may well only be one (called 0) | 19:31 |
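To see what a node actually contains on a given machine, the sysfs view can be read directly. A sketch assuming the standard /sys/devices/system/node layout; a small single-socket box will usually show only node0:

```python
# List each NUMA node with the CPUs and the memory it contains.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpulist = (node / "cpulist").read_text().strip()  # may be empty for a cpuless node
    memtotal = next(line.strip()
                    for line in (node / "meminfo").read_text().splitlines()
                    if "MemTotal" in line)
    print(f"{node.name}: cpus=[{cpulist}] {memtotal}")
```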
i915 | by cards what do you mean? mobo, memory cards, pci express ...? | 19:33 |
i915 | and what does a node have to do with irq settings? irqs are only hardware interrupts to a cpu, so it's only of any use for an irq if a cpu (and memory, in the first place) is associated with the node | 19:34 |
apw | i am talking about peripheral cards, pci whatever. an interrupt coming in needs to be routed to a cpu and the io it performs needs memory to land in. these two affinities define where those two are | 19:41 |
apw | they will in the normal case be the same, but they may not | 19:42 |
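One way to check whether the two affinities line up for a particular interrupt is to read both files and then look up the CPUs local to that node. A sketch; the IRQ number is a placeholder, and /proc/irq/N/node can read -1 when the device has no NUMA association:

```python
# Compare where an IRQ may be handled (CPU affinity) with the NUMA node
# its device is associated with (memory affinity).
from pathlib import Path

irq = 0  # placeholder: pick a device IRQ from /proc/interrupts
base = Path(f"/proc/irq/{irq}")

cpu_affinity = (base / "smp_affinity_list").read_text().strip()
node = (base / "node").read_text().strip()
print(f"IRQ {irq}: may be handled on CPUs [{cpu_affinity}], memory node {node}")

node_cpus = Path(f"/sys/devices/system/node/node{node}/cpulist")
if node_cpus.exists():
    # In the common case the two sets line up; on larger NUMA machines they may not.
    print(f"CPUs local to node {node}: [{node_cpus.read_text().strip()}]")
```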