[01:14] blr: Landed the getAdministratedTeams timeout fix, so hopefully qastaging will be a bit happier soon.
[01:42] wgrant: ah great
[03:00] I built a package of glibc on my Launchpad server, then published it, but the state of glibc is still Uploading. Why?
[03:01] maozhou: What do you mean by "published it"?
[03:02] It can't be published if it hasn't been uploaded yet.
[03:04] I run "echo "123123" | sudo -S scripts/process-upload.py -vvv --builds -C buildd /var/tmp/builddmaster"
[03:04] LP_DISABLE_SSL_CERTIFICATE_VALIDATION=1 /home/kylin/ubuntu-archive-tools/queue -l dev -s utopic accept
[03:04] scripts/process-accepted.py -vv ubuntu
[03:04] scripts/publish-distro.py -vv
[03:04] scripts/publish-distro.py -C
[03:05] maozhou: What's the 'echo "123123"'?
[03:05] process-upload.py does not read from stdin.
[03:06] It's the password of the user who runs the script.
[03:07] That's certainly unusual.
[03:07] You'll need to check the logs from process-upload.py to see why your build was not uploaded.
[03:08] On another Launchpad server I do it the same way, and the state of the package is successful.
[03:09] Where are the logs from process-upload.py?
[03:09] stderr
[03:35] wgrant: the log is http://imgur.com/mVLK4PE http://imgur.com/25WeYTn http://imgur.com/B8RcZg2 , but I can't find any message about "glibc"
[03:39] maozhou: You seem to have a lot of warnings and errors there. I'd work those out before trying to understand why another upload hasn't appeared.
[03:40] But consider searching /var/tmp/builddmaster for a directory name ending in PACKAGEBUILD-$BUILDID, where $BUILDID is the ID of the build that is stuck Uploading.
[03:42] And I found a package bigger than 256M that can't be uploaded successfully.
[03:42] What goes wrong when you try?
[03:45] If a package builds successfully on my Launchpad, and its binary package is bigger than 256M, then running process-upload.py gives an error.
[03:45] Error?
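The directory search wgrant suggests at [03:40] is easy to script. A minimal sketch (the helper name and the sample directory names are illustrative, not part of Launchpad):

```python
import os


def matching_upload_dirs(dir_names, build_id):
    """Return the names that end in PACKAGEBUILD-<build_id>.

    Feed it the contents of the directory given to process-upload.py,
    e.g. matching_upload_dirs(os.listdir("/var/tmp/builddmaster"), 42),
    to locate the upload for the build stuck in the Uploading state.
    """
    suffix = "PACKAGEBUILD-%d" % int(build_id)
    return sorted(name for name in dir_names if name.endswith(suffix))
```

If the matching directory is missing entirely, the upload never made it off the builder; if it is present, its contents and the process-upload.py stderr should say why it was not processed.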
[03:47] We regularly deal with packages larger than a gigabyte without a problem.
[03:48] It's conceivable that there may be a problem with doing it on a 32-bit system, but I've not seen anything like that.
[03:50] Yes, the OS is 12.04 32-bit.
[03:51] Ah, that's inadvisable for a production deployment.
[03:51] I'd expect it to work, but it's not thoroughly tested at scale.
[03:54] The Launchpad server should be deployed on a 64-bit OS, right?
[03:55] maozhou: All of our production systems are running on amd64, yes.
[03:55] maozhou: The test suite works fine on i386, but it doesn't test large datasets.
[03:58] OK, I know.
[07:23] wgrant: My tentative plan for dose-builddebcheck was to see if it could be run on the builder side
[07:30] cjwatson: What benefit does that give us?
[07:31] cjwatson: AFAICS it just makes it less likely that we can autoretry depwaits.
[07:31] Actual accurate build-dep analysis rather than guesswork
[07:31] s/build-dep/dep-wait/
[07:31] The case at hand in this bug could dep-wait rather than fail.
[07:31] In the current environment we need to be quite conservative with depwaits.
[07:32] Though it's already imperfect because of virtual packages, I guess.
[07:32] One thing I was thinking of was that we could try sbuild, and if it fails to install build-deps we could then run dose-builddebcheck to find out why.
[07:32] That is, we should avoid creating depwaits that can't be automatically retried when the dep becomes available.
[07:33] Returning a deeper dep makes that more likely, as one of the deps in between could change such that the dep still doesn't exist, but the package's build-deps are now satisfiable.
[07:33] In which case you'll get a new answer from d-b
[07:33] Only if we run it again.
[07:34] On what condition do we trigger it to rerun?
[07:34] There are a few depwait builds in PPAs and such.
[07:34] On a new build ...
[07:34] Right, that means we lose depwait autoretry.
[07:34] Then what value does depwait provide?
[07:34] Um? No we don't
[07:35] foo depends on bar.
[07:35] bar depends on baz, which doesn't exist.
[07:35] baz isn't meant to exist; it's a bug in bar.
[07:35] New bar is uploaded, no longer depending on baz.
[07:35] I'm proposing having launchpad-buildd run it if sbuild reports a missing dep
[07:35] foo's build is depwait on baz, and will never be autoretried.
[07:35] As a replacement for the regexes
[07:36] Ah, right, yes, that's a problem.
[07:36] Currently in that case we fail.
[07:36] And everyone knows a failure is sinister.
[07:36] Though, the way that d-b works means that we would have control over how deep a dep-wait we choose to accept.
[07:36] (a depwait may be sinister in the virtual case, but non-existent virtual packages are conveniently rare)
[07:39] wgrant: is it ever dexter?
[07:40] lifeless: Most of the time :)
[07:45] Hm, but d-b doesn't tell us the full chain; I clearly misremembered that
[07:45] So not as workable as I thought :(
[07:45] d-b is useful, I think, if you can run it beforehand and then run it regularly until it succeeds.
[07:46] Right, as you say that's difficult with lots of archives and bootstrap etc. though.
[07:47] Although the bootstrap archive could be considered to be under manual control only, I suppose.
[07:49] There are two cases there: port bringup and bootstrapping cycles in normal operation. In the latter you normally have a specific sequence of packages in mind and don't want other things to automatically pop up.
[07:49] Not sure about the former.
[07:57] Yeah.
[08:00] If we can ignore that case then it becomes "just" a performance problem of grabbing all the Packages/Sources files to alphecca for any current dep-wait build and analysing the lot.
[08:00] And stopping trying to figure out the dep-wait target from sbuild output.
[08:01] The quotes are the reason I haven't tried it.
[08:01] I should see how long it would actually take.
[08:05] Unfortunately we can't optimise it quite like we do publish-distro.
[08:05] That is, skip archives that haven't changed.
[08:05] Indeed.
[08:05] Because people occasionally do upload to the primary archive, and a few things depend on it.
[09:12] I dput a package to my Launchpad server and accepted it, but the queue is empty. Why?
[09:22] maozhou: Why would it be in the queue after you'd accepted it from the queue?
[09:24] The build queue is empty; there are no build jobs after accepting the package.
[09:25] maozhou: What did process-upload say about it?
[09:31] It says "Finished checking upload" and "Setting it to ACCEPTED".
[09:32] Did it say it created any builds?
[09:32] Should it have created any builds?
[09:32] Have builds previously worked on that series on that instance?
[09:35] No, it hasn't created any builds.
[09:35] Does it have appropriate DistroArchSerieses and chroots configured?
[09:36] It's the first package on this test Launchpad server.
[09:38] The DistroArchSeries is the default, ubuntu/utopic, and I have dput the chroot to it.
[09:38] One cannot dput a chroot.
[09:38] And ubuntu/utopic does not exist by default.
[09:40] I can access the URL "launchpad.dev/ubuntu/utopic".
[09:40] How did that get there?
[09:41] And I ran "LP_DISABLE_SSL_CERTIFICATE_VALIDATION=1 ./manage-chroot -l dev -s utopic -a i386 -f chroot-ubuntu-utopic-i386.tar.bz2 set" to dput the i386 chroot file.
[09:41] Ah, so not dput.
[09:42] How did you create utopic?
[09:42] Does https://launchpad.dev/api/devel/ubuntu/utopic have a link to the chroot?
[09:42] Yes, it exists.
[09:42] What is the full output of process-upload?
[09:43] E: Build-Depends dependency for sbuild-build-depends-r-bioc-biovizbase-dummy cannot be satisfied because candidate version of package r-bioc-variantannotation can't satisfy version requirements
[09:43] hm, that's less helpful than I might have wished for, apt
[09:44] cjwatson: Hum.
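The API check wgrant asks about at [09:42] can be scripted against the dev instance. A sketch, assuming the launchpad.dev hostnames used above and the dev setup's self-signed certificate (hence the unverified SSL context, the moral equivalent of LP_DISABLE_SSL_CERTIFICATE_VALIDATION=1); the helper names are illustrative:

```python
import json
import ssl
from urllib.request import urlopen


def das_url(series_url, arch):
    # e.g. das_url("https://launchpad.dev/api/devel/ubuntu/utopic", "i386")
    return "%s/%s" % (series_url.rstrip("/"), arch)


def has_chroot(das_entry):
    # manage-chroot only succeeded if the DistroArchSeries entry
    # carries a non-empty chroot_url link.
    return bool(das_entry.get("chroot_url"))


def fetch_das(url):
    # Dev instances use a self-signed certificate, so skip verification.
    ctx = ssl._create_unverified_context()
    with urlopen(url, context=ctx) as response:
        return json.load(response)
```

`has_chroot(fetch_das(das_url("https://launchpad.dev/api/devel/ubuntu/utopic", "i386")))` should return True once the chroot upload has actually taken effect; if it is False, no builds will be created for that architecture.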
[09:44] (trying apt-get build-dep after realising that of course the way sbuild constructs the dummy Sources means that --[no-]resolve-alternatives should be fine)
[09:45] How do I verify that https://launchpad.dev/api/devel/ubuntu/utopic has a link to the chroot?
[09:46] https://launchpad.dev/api/devel/ubuntu/utopic/i386, rather
[09:49] Oh, the chroot_url is empty.
[09:49] The manage-chroot run, then, did not succeed.
[09:50] But I ran "LP_DISABLE_SSL_CERTIFICATE_VALIDATION=1 ./manage-chroot -l dev -s utopic -a i386 -f chroot-ubuntu-utopic-i386.tar.bz2 set" and there was no error.
[09:51] Consider that your hostname resolution may have issues.
[09:51] Since you're using launchpad.dev everywhere, you must have some serious /etc/hosts entries.
[09:52] In /etc/hosts, the IP is the IP of the Launchpad server.
[09:52] Does api.launchpad.dev resolve to the right IP address?
[09:55] How do I verify that api.launchpad.dev resolves to the right IP address?
[09:56] ping or host, for example.
[09:57] Yes, when I ping launchpad.dev, the IP is right.
[09:58] Not launchpad.dev.
[09:58] api.launchpad.dev.
[10:01] Oh, when I ping api.launchpad.dev, the IP is 127.0.0.88. Is that an incorrect IP?
[10:02] I can't tell you that.
[10:02] I know not the layout of your configuration.
[10:06] Oh, I'm sorry, I was looking in the wrong place. On the machine that ran "LP_DISABLE_SSL_CERTIFICATE_VALIDATION=1 ./manage-chroot -l dev -s utopic -a i386 -f chroot-ubuntu-utopic-i386.tar.bz2 set", pinging api.launchpad.dev gives the IP of the Launchpad server.
[10:07] Can you see the relevant request in the Launchpad webapp access log?
[10:09] Where is the Launchpad webapp access log?
[10:10] In the development configuration it is logs/launchpad-access.log
[10:14] wgrant: Trying to understand http://bazaar.launchpad.net/~wgrant/launchpad/product-aps-set/revision/17584 - in what way does ProductSet.getProductPrivacyFilter require a policy or artifact grant?
AFAICS it requires one of (product is public, product has no information type, policy grant)
[10:16] cjwatson: AccessPolicyGrantFlat is a combination of AccessPolicyGrant, AccessArtifactGrant, and AccessPolicyArtifact.
[10:16] That is, one can use it to query whether someone has a grant for a policy, or a grant for any artifact under that policy.
[10:16] ah I see
[10:17] Someone with a subscription to a bug in a private project must be able to see the project, so the AAG's row in APGF is what allows them that access.
[10:17] So with this we'll basically stop using APGF for the project privacy filter
[10:17] Currently implemented in getProductPrivacyFilter (buggily) and SharingService.checkPillarArtifactAccess.
[10:17] Correct.
[10:18] APGF was originally devised for the listing on +sharing, but it proved useful for access checks too.
[10:19] Wait, no, not correct.
[10:19] APGF will still be used.
[10:19] What we're removing is the join through AccessPolicy.
[10:25] * cjwatson looks at some more branches at once. I think I get it now.
[10:25] We still use APGF to get the set of valid APs.
[10:25] But we can now do that once, and compare it against the precached list on each Product.
[10:25] Like we do on Bug/Branch/etc.
[10:26] The only difference from the artifact case is that for Product we also allow artifact grants to count.
[10:29] Would it be worth using AccessPolicySource.find instead of findByPillar so that we can query for just the types we care about?
[10:29] (in _cacheAccessPolicies)
[10:29] Also, GitRepository etc. call it access_grants instead
[10:30] Ah, yes, find is usable now.
[10:30] access_grants is a list of Person.ids who have AAGs for the artifact.
[10:31] Each artifact has access_grants (a list of Person.ids) and access_policies (a list of AccessPolicy.ids).
[10:35] I think they mostly just have a single access_policy
[10:36] Ah, yes, all except bugs have just one, that's true.
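The access check wgrant describes here can be modelled in a few lines. This is an illustrative sketch of the semantics only (sets of AccessPolicy ids standing in for APGF rows), not Launchpad's actual Storm query:

```python
def can_view_product(product_policy_ids, policy_grant_ids,
                     artifact_grant_policy_ids, is_public=False):
    """True if the user may see the product.

    product_policy_ids: the precached access_policies on the Product.
    policy_grant_ids: policies the user holds AccessPolicyGrants for.
    artifact_grant_policy_ids: policies under which the user holds an
        AccessArtifactGrant (e.g. a subscription to a private bug) --
        the rows that APGF flattens in alongside policy grants.
    """
    if is_public:
        return True
    # Unlike plain artifacts, Product also lets artifact grants count,
    # so either kind of APGF row under one of its policies is enough.
    granted = set(policy_grant_ids) | set(artifact_grant_policy_ids)
    return bool(granted & set(product_policy_ids))
```

The third argument captures the bug-subscriber case from [10:17]: a subscription to a private bug yields an AAG, whose flattened APGF row is what makes the project itself visible.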
[10:36] Bugs may have multiple, as they may affect multiple pillars.
[16:33] wgrant: We actually can make useful use of dose-builddebcheck, I've worked out (after swearing a lot at apt-get build-dep's failure to provide useful failure information). Algorithm as follows. First, try sbuild, which I think should be modified to use apt-get build-dep anyway, but it doesn't really matter here. If that fails and the output matches a build-dep failure of some kind, run dose-builddebcheck --explain --failures, ...
[16:34] ... passing all candidate Packages files and a Sources containing just the source we're building. If it reports no reason, PACKAGEFAIL. If it reports a reason with no depchain, then that's a broken direct build-dependency and can safely be turned into DEPWAIT. If it reports a reason with a depchain, then that's an indirect broken dependency, and for the reasons you gave above that needs to be PACKAGEFAIL.
[16:35] Not as potentially elegant as using dose-builddebcheck pre-build somehow, but we know that's hard in our setup, and this deals with the problems of getting an accurate dep-wait out of apt-get build-dep.
[16:37] Takes 8s on my laptop to analyse something universe-sized, 1s to analyse something main-sized.
[16:38] We could optimise a bit by only doing this in cases where it's hard to get sensible information out of apt-get build-dep.
[18:45] wgrant: Any progress on webhooks, BTW?
[19:57] cjwatson: Hm, that's extraordinarily slow.
[20:09] *shrug* For something at the end of an sbuild run I'm not very bothered and haven't tried to optimise it.
[20:09] I've probably missed something
[20:09] Yeah, it just means it's completely impossible to run as something like retry-depwait eventually :/
[20:10] Unless I missed something blatantly obvious, which is probable :)
[20:12] Also it depends how that scales; if it's spending most of its time just parsing the big Packages file then that's less of a problem than if it's spending lots of time analysing per-package.
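The classification rule cjwatson lays out at [16:33]-[16:34] can be sketched directly. The report structure here (a list of failure reasons, each optionally carrying a depchain) is an assumption modelled loosely on dose-builddebcheck's --explain output, not its exact schema:

```python
PACKAGEFAIL = "PACKAGEFAIL"
DEPWAIT = "DEPWAIT"


def classify_builddebcheck(reasons):
    """Classify a parsed dose-builddebcheck failure report.

    `reasons` is a list of dicts, each describing one broken dependency
    and optionally carrying a "depchain" when the breakage is indirect.
    """
    if not reasons:
        # No reason reported: not a dependency problem we can wait on.
        return PACKAGEFAIL
    if any(r.get("depchain") for r in reasons):
        # Indirect breakage: a depwait recorded against a transitive
        # dep can never be autoretried reliably (the intermediate
        # package may be fixed without the waited-on package ever
        # appearing), so fail instead.
        return PACKAGEFAIL
    # Every reason names a broken *direct* build-dependency: safe to
    # record a DEPWAIT and autoretry when it becomes available.
    return DEPWAIT
```

This encodes why the foo/bar/baz case from the morning's discussion must be PACKAGEFAIL: the broken dependency there sits behind a depchain through bar, so waiting on baz would never be retried.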
[20:26] morning
[20:48] cjwatson: ah, thanks for catching the mode edge case
[20:49] This bit of code did worry me in the past... glad we have a few more tests in place.
[21:41] cjwatson: Did you look over product-aps-use?
[22:07] wgrant: I got started on it, but only just
[22:07] cjwatson: k
[22:07] Quite a bitty day; I had to take B out to go suit shopping.
[22:08] I won't land the first bits until you've confirmed it's all good.
[22:08] Though I guess the schema patch is safe.
[22:11] cjwatson: The refactored diff processing is much more readable, thanks.
[23:19] wgrant: What do you think of asking webops for https://pastebin.canonical.com/134245/ ? It's obviously PII, but in discussion a while back we seemed to collectively think it would be OK ...
[23:23] cjwatson: Sounds reasonable to me.
[23:33] cjwatson: Did you see my comment re: the comment diff email?
[23:34] Currently we're not preserving comments on dirty headers at all (I can't recall if we agreed on that, or whether that's an oversight).
[23:40] bzrlib parses a modified file with properties changed as a 'dirty header'.
[23:44] blr: Replied to your comment (indeed, you're correct)
[23:45] blr: I don't think it should be possible for comments to go missing from any position, so that would be a good thing to fix.
[23:46] Maybe just treat it much the same way you do comments on patch headers.
[23:53] cjwatson: The devel buildbot green is false. It looks like a query count test broke.
[23:53] Though it may be spurious, as I don't see how you could have broken it.
[23:53] thanks.
[23:53] Unless it was test ordering, ew.
[23:57] It looks like it could benefit from record_two_runs.
[23:58] It does fail on a single run here.