=== JanC is now known as Guest50625
=== JanC_ is now known as JanC
[00:41] wgrant,mwhudson: 0 5,11,17,23 * * *
[00:42] yummy cron syntax
[00:42] every 6 hours, at a wild guess one hour offset from when gina runs?
[00:43] gina is 05 4,10,16,22 * * *
[00:43] so close enough, yes
[00:45] mwhudson: Is 42 */3 * * * any more fun? :3
[00:45] Unit193: no it's all terrible
[00:45] apart from @daily, @hourly etc i can remember what those mean :)
[00:46] Or @reboot in that case.
=== juliank is now known as Guest27344
=== juliank_ is now known as juliank
[01:42] Does anyone know how to send a drag-n-drop event to another application programmatically?
[01:43] I'm trying to send multiple *.kml files to Google Earth to be displayed on the same map. But when I use g_app_info_launch_default_for_uri() the 2nd and subsequent files are rejected and Google Earth emits an error saying that an instance is already open.
[01:45] that's not DnD; seems like a bug in google earth that it doesn't support taking multiple kml files as command line args
[01:47] dobey - well the reason I am looking into DnD is because I can drag and drop additional *.kml files into the Google Earth window
[01:48] So I thought maybe my application can somehow send DnD events to a running instance of Google Earth to get it to do what I want.
[01:48] (rather than g_app_info_launch_default_for_uri)
[01:49] well i'm not sure what you're trying to do exactly. why are you using g_app_info_launch_for_uri()?
[01:50] That's how I send the first *.kml file to Google Earth. It launches a new window and the KML is displayed properly.
[01:50] what happens if you run "google-earth foo.kml bar.kml" ?
[01:51] faking DnD is not the right answer here. it's certainly possible, but quite ugly and would be very disturbing for any user of your app
[01:52] Hmmm, it says can't open file for reading.
[01:52] well, where foo.kml and bar.kml are two of the URIs you're trying to pass to google earth
[01:53] and google-earth is whatever the actual command for running google earth is
[01:53] but i don't think you want to run g_app_info_launch() anyway
[01:53] I hadn't tried opening a *.kml directly from a terminal prompt. It doesn't work that way.
[01:53] what doesn't work that way?
[01:54] It only seems to work with g_app_info_launch_default_for_uri().
[01:54] When I pass the *.kml on the command line Google Earth launches but then displays a dialog saying it can't open the file for reading.
[01:55] Just running "google-earth foo.kml" fails every time.
[01:55] well, what does "xdg-open foo.kml" do then?
[01:55] Yes, that works.
[01:56] and what happens if you do that while google earth is running?
[01:57] That tries to launch a new instance of Google Earth which outputs an error about another instance running - then the new instance crashes.
[01:57] "Google Earth appears to be running already. Please kill the existing process, or delete /home/dan/.googleearth/instance-running-lock if this is an error."
[01:58] so then google earth is broken and doesn't support any rpc to tell it to open another file i guess
[01:59] sounds like it.... but what about my DnD idea?
[02:01] I don't think Google cares about Linux so I suppose I need to find some hack to work around this.
[02:02] well i don't know what your app is doing exactly, but i would suggest not relying on google earth being installed
[02:03] Well, it might be too much work to write a Google Earth clone from scratch!
[02:04] And how is it you don't know what my app is doing exactly? I said what it is trying to do. Load multiple *.kml files into Google Earth and display them simultaneously. Do you know what a KML is?
[02:04] i know what a kml is.
i don't know why you're writing an app to load kml files into google earth, when google earth already internally has support for loading kml files
[02:04] Also, why do you keep saying I shouldn't use g_app_info_launch_default_for_uri() for this? That seems like exactly what this API is for.
[02:05] dobey, my app generates the KML files. I want to display them without writing Google Earth from scratch.
[02:05] because you want to pass multiple URIs, not run N instances of google earth (or whatever mapping app might be installed)
=== yuning-afk is now known as yuning
[02:06] i presume you are using gtk+ or gnome libs?
[02:06] Well, I want to add these *.kml files at different times. Not all at once.
[02:06] Yes, I am using GTK+ which of course gives me access to glib APIs.
[02:07] you might want to look at using libchamplain, which provides a map widget, and i think has support for loading kml data
[02:08] That sounds like it will be a degraded user experience. I've already got this working in Windows and OSX. I'm just trying to port my application to Linux.
[02:09] I thought this would be easy since I am using GTK+. Boy was I wrong.
[02:09] (this issue is one of many!)
[02:09] ...but it seems to be the last roadblock
[02:14] Why do you think DnD is a bad idea? It seems to be my only hope.
[02:15] because having the mouse move randomly across the screen is never good
[02:16] and because it only increases the dependency on google earth existing
[02:16] I wasn't suggesting taking control of the mouse. Surely there is an ABI for this.
[02:16] how else would DnD work?
[02:17] How does the file manager send the DnD to the open application? It doesn't need a user and a mouse to do this.
[02:17] you have to initiate a drag, move the mouse to the other window, and then drop
[02:18] i don't think what you are calling DnD, is DnD
[02:19] DnD is just a form of interprocess communication.
The details of mouse pointers and mouse clicks are just a user input method that directs the application to what the user wants. Then the application initiates the actual DnD somehow programmatically.
[02:19] Presumably the window manager plays a part in all this.
[02:20] DnD is not "just a form of IPC"
[02:21] My application accepts DnD events so I am not ignorant of how it works. I need to send one rather than receive one.
[02:27] Does anyone else have anything to add? I don't think dobey can help with this.
[04:10] raymod2: It *is* possible to send DnD events programmatically, but this means you'll need to be implementing (one or more of) the (three) X11 DnD protocols.
[06:35] anyone using apt-mirror to mirror trusty? it's giving weird errors here..
[06:42] looks like the skel-directory was corrupt
[06:47] tjaalton: can you pastebin something? there were problems with the archive mirrors ~38 hours ago that might have caused trouble on downstream mirrors.. that problem should have been resolved 'everywhere' by now though
[06:49] sarnold: nah this was there for some time now, I'll let it sync the mirror before checking again. at least the skel-directory looks better now
[06:49] skel-directory?
[06:49] I think it's a local cache for apt-mirror
[06:49] tjaalton: btw, skel-directory? my (not apt-mirror based) mirror doesn't have anything matching '*skel*dir*' ..)
[06:49] ahh
[06:49] Ah
[06:49] Yeah, not an actual thing.
[06:49] once it's in a consistent state the files are copied to the proper place
[06:50] one thing I noticed though is that we're not generating Contents files like debian?
[06:50] Debian moved their Contents files a while ago, and we haven't followed suit.
[06:50] right
[06:50] ok
[06:50] is it planned?
[06:50] Specifically, Debian's are now per-component.
[06:51] We have no specific plans to follow that change.
[06:51] But we're also not explicitly not.
[06:51] okay
[06:51] There's just no pressing need to do that work, as far as we can tell.
[06:52] what about doing .xz files?
[06:53] Packages
[06:53] xenial added xz and dropped bz2.
[06:53] http://archive.ubuntu.com/ubuntu/dists/xenial/main/binary-amd64/
[06:53] ah, indeed
[06:58] slangasek, what's the status of https://launchpad.net/ubuntu/+source/command-not-found/0.3ubuntu16.04.1 ? there is no SRU bug so it looks like nobody on the SRU team is going to copy that over to updates; should a new upload be done, or the SRU team convinced to copy it without an associated bug?
=== athairus is now known as afkthairus
[10:16] If I need to force an update of a package (kexec-tools) when a new kdump-tools is installed, is a versioned Depends: sufficient or do I also need to add a Breaks: ?
[10:25] caribou: a versioned Breaks is sufficient I think, if a newer version of kexec-tools is available. A Depends would additionally force both packages to be installed.
[10:26] rbasak: I'm currently using only a versioned Depends which works correctly
[10:26] rbasak: just curious to know if the Breaks is better
[10:28] I don't think you need the Breaks, at least not for that specific requirement.
[10:28] Are you familiar with https://wiki.debian.org/PackageTransition? It should cover most cases.
[10:34] rbasak: that's what I thought also
[10:34] rbasak: I think I read this one recently; I went through so much of the Debian documentation lately that I'm a bit lost in what I have or haven't read
[10:37] rbasak: thanks; I read part of it earlier but now it outlines another change in kexec-tools that needs to go in.
[10:37] rbasak: so you are correct; according to that, a Depends: alone is sufficient
=== hikiko is now known as hikiko|ln
=== hikiko|ln is now known as hikiko
[12:28] can anyone think why there might be a clock skew from the host on an adt-virt-qemu created vm?
[12:40] for some reason it seems to be a few hours behind the host time
=== mterry_ is now known as mterry
[13:27] Is it good practice to use distribution-specific variables (like ${dist:Depends}) in debian/control to avoid delta b/w Ubuntu and Debian?
[13:28] I have one kexec-tools dependency that is specific to an Ubuntu delta that I'm getting rid of, but I'd like to avoid a delta in the depending package
[13:29] sorry for all those questions, but I _really_ want to weed out as much delta as possible b/w kdump-tools & kexec-tools
=== bladernr` is now known as bladernr
[13:42] I'm trying to add an upstart job (on trusty) between networking coming up and cloud-init running; cloud-init-nonet has "stop on static-network-up" and cloud-init has "start on ... and stopped cloud-init-nonet"; if I set "start on stopping cloud-init-nonet" will my job block cloud-init running?
[13:42] smoser: (You might be able to help me with ^)
[13:43] probably not
[13:43] smoser: Do you know of a way of doing it?
[13:45] looking
[13:46] xnox, ^ ?
[13:46] xnox is on a boat this week, I believe.
[13:46] hm.
[13:48] Odd_Bloke, starting should work
[13:48] (man starting)
[13:48] init(8) will wait for all services started by this event to be running,
[13:48] and maybe stopping too
[13:48] as it has similar text.
[13:49] so yes, i think you're right
[13:49] i'd give it a try with a task that does sleep 90
[13:49] OK, I'll try putting a sleep in and see if things block like that.
[13:49] ^_^
[13:50] smoser: Ah, http://upstart.ubuntu.com/cookbook/#task also has some relevant text.
=== mnepton is now known as mneptok
[13:55] smoser: Yep, "start on starting cloud-init" and "task" blocks cloud-init from starting until the "script" block is done. Thanks for the help!
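For reference, the arrangement Odd_Bloke and smoser converged on could look like the following upstart job. This is a hypothetical sketch, not the actual job that was deployed; the job name and the work inside the script block are placeholders:

```
# /etc/init/pre-cloud-init.conf  (hypothetical job name)
description "work that must finish before cloud-init starts"

# "starting cloud-init" fires while cloud-init is held in the starting
# state; combined with "task", init(8) waits for this job to run to
# completion before allowing cloud-init to proceed.
start on starting cloud-init
task

script
    # placeholder for the real work (the "sleep 90" used to test blocking)
    sleep 90
end script
```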
[13:59] smoser: (And without "task" it doesn't block)
[14:01] you could have probably done it without task, with a pre-start
[14:01] but task is probably right
[14:01] if it is something that should run to completion and be done
[14:01] like sleep 90
[14:01] :)
=== nacc_ is now known as nacc
[15:21] slangasek: general question for you -- historically (and currently), do syncs occur from any release of Debian or primarily current testing, unstable and experimental? How exactly does experimental work? Are publishes in experimental considered part of the history of unstable (but a place to 'test' things for unstable)? I ask because the changelog entries for releases in each refer to the other series,
[15:22] so I'm trying to decide how best to handle them in the importer
[15:22] sigh, /me googles and maybe finds the relevant answers
[15:35] nacc: autosyncs happen from unstable. Manual syncs can happen from any of the three.
[15:35] infinity: got it
[15:36] nacc: A manual sync from experimental, however, will *not* track experimental, so manual syncing is required from thereon until unstable has a higher version.
[15:36] (Perhaps a misfeature, but one we've lived with for a decade)
[15:36] infinity: ah sure, that makes some sense, at least
[15:36] infinity: luckily i'm not too worried about the process, i'm just trying to recreate the history in git :)
[15:37] infinity: and our 'parent verification' doesn't work with debian's process yet, so i'm just trying to get my head around it
[15:37] nacc: As for the Debian history of a package, that's muddier, as it's maintainer dependent. Some people scrub experimental from history when uploading to unstable, some people don't.
[15:37] infinity: yeah, that's what i'm noticing :)
[15:38] infinity: up until now, we were only importing 'sid' explicitly ...
but that led to us not finding syncs directly from experimental, if they never got pulled into sid at any time, so i'm adding experimental as a first step, but i'm realizing they are more intertwined than the ubuntu notion of series, so our existing code doesn't dtrt
[15:38] nacc: And to make it more exciting, some people will have revisions in there that have never existed in the Debian archive. So, attempting to create a commit/tag for that revision won't work out too well. :P
[15:39] infinity: yeah we've seen that a few times already :)
[15:39] infinity: we only go off spph right now for creating tags
[15:39] infinity: we just do some sanity checks off the changelog entries
[15:41] nacc: By any chance, are you working on version control for all packages in the archive, with imports from Debian?
[15:42] I suspect he's only working on a small subset he cares about.
[15:42] Unless he's decided to do LP dev work and is actually making git package imports a thing for everyone.
[15:43] rlaager: it works for any package
[15:43] infinity: starting small, but there's no reason it can't be used by others :)
[15:43] there are lots of corner cases :)
[15:43] when I was doing version tracking in debbugs I decided that the correct thing to do was to infer the history from changelogs
[15:43] nacc: You might want to poke cjwatson/wgrant and see if this overlaps with any future plans they had.
[15:44] infinity: there are probably better ways to do this fully w/in LP; my tool just takes the spph and applies an algorithm rbasak has devised
[15:44] infinity: ack, will do!
[15:44] cjwatson: That works better for debbugs than it does for attempting to actually break uploads into git commits, though.
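The parenting approach nacc describes (treat Launchpad publishing history as authoritative, sanity-check it against debian/changelog) can be sketched roughly as follows. This is a much-simplified illustration, not the actual importer; the function name and data shapes are invented:

```python
# Sketch: parent of each version = the previously *published* version,
# with changelog claims recorded only as sanity-check disagreements.
def infer_parents(published_versions, changelog_parents):
    """published_versions: versions in publication order.
    changelog_parents: version -> parent claimed by its debian/changelog
        (may disagree with publication order, e.g. a dropped NMU entry).
    Returns (parents, mismatches)."""
    parents = {}
    mismatches = []
    for prev, cur in zip(published_versions, published_versions[1:]):
        parents[cur] = prev  # publishing history wins
        claimed = changelog_parents.get(cur)
        if claimed is not None and claimed != prev:
            mismatches.append((cur, claimed, prev))
    return parents, mismatches

# The NMU example from later in the discussion: maintainer uploads A,
# an NMU A.1 follows, then the maintainer uploads B whose changelog
# omits the NMU entirely.
parents, mismatches = infer_parents(
    ["A", "A.1", "B"],
    {"A.1": "A", "B": "A"},  # B's changelog skips A.1
)
# parents["B"] is "A.1" (what users actually saw), and the changelog
# disagreement for B is recorded in mismatches.
```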
[15:44] it might be a useful component of what we need to do; there are several other pieces
[15:44] infinity: the git case is more complicated, but knowing which versions are parented to which other versions is still useful, I think
[15:45] cjwatson: yeah, we are treating the spph as "authoritative" and orphaning when things don't match that
[15:45] cjwatson: so everything will get a tag, but not everything will have a full 'git' history necessarily
[15:45] the spph doesn't tell you the parent version, unless you parse the changelog
[15:45] cjwatson: right, in our algorithm, 'parent' is the previously published version
[15:45] cjwatson: we verify it against the changelog, and handle some specific cases where they don't agree
[15:45] mm, which is sort of OK for a single distribution but doesn't deal with merges excellently
[15:46] cjwatson: rbasak is probably better at explaining the thoughts behind the algorithm, but let me share the doc with you
[15:47] * rbasak is otp
[15:47] nacc: ok, I don't really have time for an extended discussion right now :)
[15:47] was just drive-bying
[15:47] cjwatson: np! i'm not doing it justice
[15:48] cjwatson: i haven't hit a merge yet where the algorithm gets confused, but i've only tested a small set of packages
[15:48] tbh, the issues we've hit so far have all been with just the normal history being ... unexpected :)
[15:52] rbasak: if/when you are off the phone, would appreciate a minute of your time
[15:52] ack
[15:57] nacc: syncs can happen from any Debian suite, but autosyncs all happen from a single place (unstable, with I think only one historical exception for an LTS). Syncs from experimental can be done individually (as we did last cycle for a few php packages). Whether it's part of the "history" of unstable or not is entirely package-dependent; you can infer some of this from debian/changelog
[15:58] slangasek: thanks
[15:58] seb128: command-not-found> gah, sorry, seems this fell off my radar.
publishing it now (Friday or not)
[15:58] slangasek, thanks
[16:03] Unfortunately debian/changelog misses history sometimes, and I'd prefer the git history to show what was actually published and available to users rather than what the changelog shows.
[16:05] That makes it a little difficult to decide what the parent of a particular upload is. I think there's a fundamental dissonance between how the archive is organised and what I think git history should show. So I think it's necessary to use some heuristics to do a reasonable job in a bunch of common edge cases.
[16:05] nacc: ping. Hangout?
[16:12] rbasak: we're talking about a separate branch (unstable vs. experimental); unless you have an actual git repo, you can't infer anything about the merge topology /except/ by looking at debian/changelog
[16:12] slangasek: for an Ubuntu merge, we're doing something a little odd, but I think it works.
[16:12] hmm
[16:12] A merge ends up with two parents. One of them is the claimed parent in debian/changelog
[16:13] The other is what was previously in the pocket.
[16:13] I think.
[16:13] yeah
[16:13] right, that doesn't sound odd to me at all, that sounds correct ;)
[16:13] and consistent with the prior art on this in UDD
[16:13] But except for the merge case, we're picking one parent.
[16:14] I favour picking the one in LP publishing history, because that's what users saw. It'd be confusing for it not to match, I think, for example when working out why something happened from the history.
[16:15] Even though sometimes, for example, an Ubuntu uploader will not leave a deleted SRU proposed verification-failed entry in the changelog in an upload that supersedes it.
[16:15] not sure I understand, "except for the merge case"
[16:15] Let me take an example. In sid, the maintainer uploads version A, there is an NMU of A.1, and then the maintainer uploads B.
[16:15] Sometimes when he uploads B, A.1 isn't in debian/changelog.
[16:16] If we rely on debian/changelog to determine the parent, then A.1 will be tagged but otherwise unreachable, and not in our imported history.
[16:16] (of the sid branch tip)
[16:17] er, right, yes
[16:17] in that case it's a single branch (unstable) and the history should match the upload history
[16:17] OK. I think we agree :)
[16:17] but unstable vs. experimental are different branches, just as unstable and Ubuntu devel series are different branches, and for *that*, you only have the changelog to hint you about the merge topology
[16:19] But, experimental could jump ahead. An experimental upload could be based on a newer unstable, or on the thing that was previously in unstable.
[16:19] rbasak: https://launchpad.net/debian/+source/exim4/+publishinghistory?batch=75&memo=150&start=150 is the one i'm looking at now
[19:23] Laney: can you prod dep11-generator for scribus/xenial-proposed? /cc ximion
=== afkthairus is now known as athairus
[19:25] it really is time that I give Laney his immutable suites, so he doesn't have an excuse other than ENOTIME to switch to asgen...
[19:27] * mapreri -EPARSEs ximion's message, but guesses it was not addressed to him anyway
[19:31] appstream-generator is the new thing which doesn't have some of the flaws dep11gen has
[20:45] when building debian packages is there a way to check for unpackaged files under debian/tmp?
[20:46] dh_install --fail-missing is the usual approach
[20:57] cjwatson: isn't fail-missing the default, and doesn't it only fail if you specified the file in *.install? i want to have the same behaviour as rpm, which is something like unpackaged files
[21:03] cjwatson: thanks, i figured out i need to override dh_install
[21:06] --fail-missing isn't the default, no
[21:07] Not even --list-missing..
[21:08] files listed in *.install which don't exist in the install path do fail by default, but that is not the same as --fail-missing
[21:33] thanks
=== DropItLikeItsHot is now known as AfroThundr
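The dh_install override mentioned at the end would look something like this in debian/rules (a sketch for the debhelper era under discussion; on newer debhelper, compat 12 and later, dh_missing --fail-missing is the replacement for this flag):

```
# debian/rules fragment: fail the build if files installed into
# debian/tmp are not listed in any package's *.install file.
override_dh_install:
	dh_install --fail-missing
```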