[01:14] <wgrant> blr: Landed the getAdministratedTeams timeout fix, so hopefully qastaging will be a bit happier soon.
[01:42] <blr> wgrant: ah great
[03:00] <maozhou> I built a package of glibc on my launchpad server, then published it, but the state of glibc is still Uploading. Why?
[03:01] <wgrant> maozhou: What do you mean by "published it"?
[03:02] <wgrant> It can't be published if it hasn't been uploaded yet.
[03:04] <maozhou> I run "echo "123123" |sudo -S scripts/process-upload.py -vvv --builds -C buildd /var/tmp/builddmaster"
[03:04] <maozhou> LP_DISABLE_SSL_CERTIFICATE_VALIDATION=1 /home/kylin/ubuntu-archive-tools/queue -l dev -s utopic accept
[03:04] <maozhou> scripts/process-accepted.py -vv ubuntu
[03:04] <maozhou> scripts/publish-distro.py -vv
[03:04] <maozhou> scripts/publish-distro.py -C
[03:05] <wgrant> maozhou: What's the 'echo "123123"'?
[03:05] <wgrant> process-upload.py does not read from stdin.
[03:06] <maozhou> it's the password of the user who runs the script
[03:07] <wgrant> That's certainly unusual.
[03:07] <wgrant> You'll need to check the logs from process-upload.py to see why your build was not uploaded.
[03:08] <maozhou> on another launchpad server I did it the same way, and the state of the package is successful
[03:09] <maozhou> where are the logs from process-upload.py?
[03:09] <wgrant> stderr
[03:35] <maozhou> wgrant: the logs are http://imgur.com/mVLK4PE http://imgur.com/25WeYTn http://imgur.com/B8RcZg2 , but I can't find any message about "glibc"
[03:39] <wgrant> maozhou: You seem to have a lot of warnings and errors there. I'd be working those out before trying to understand why another upload hasn't appeared.
[03:40] <wgrant> But consider searching /var/tmp/builddmaster for a directory name ending in PACKAGEBUILD-$BUILDID, where $BUILDID is the ID of the build that is stuck Uploading.
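That search can be scripted; a minimal sketch using only the stdlib, assuming the directory-naming convention wgrant describes (the function name and the exact layout under /var/tmp/builddmaster are hypothetical):

```python
import os

def find_upload_dirs(root, build_id):
    """Walk `root` (e.g. /var/tmp/builddmaster) and collect directories
    whose names end in PACKAGEBUILD-<build_id>."""
    suffix = "PACKAGEBUILD-%s" % build_id
    matches = []
    for dirpath, dirnames, _ in os.walk(root):
        for name in dirnames:
            if name.endswith(suffix):
                matches.append(os.path.join(dirpath, name))
    return matches
```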
[03:42] <maozhou> And I found that a package bigger than 256M can't be uploaded successfully
[03:42] <wgrant> What goes wrong when you try?
[03:45] <maozhou> If a package builds successfully on my launchpad but its binary package is bigger than 256M, process-upload.py errors when run.
[03:45] <wgrant> Error?
[03:47] <wgrant> We regularly deal with packages larger than a gigabyte without a problem.
[03:48] <wgrant> It's conceivable that there may be a problem with doing it on a 32-bit system, but I've not seen anything like that.
[03:50] <maozhou> yes, the OS is 12.04 32-bit
[03:51] <wgrant> Ah, that's inadvisable for a production deployment.
[03:51] <wgrant> I'd expect it to work, but it's not thoroughly tested at scale.
[03:54] <maozhou> the launchpad server should be deployed on a 64-bit OS, right?
[03:55] <wgrant> maozhou: All of our production systems are running on amd64, yes.
[03:55] <wgrant> maozhou: The test suite works fine on i386, but it doesn't test large datasets.
[03:58] <maozhou> ok, I know
[07:23] <cjwatson> wgrant: My tentative plan for dose-builddebcheck was to see if it could be run on the builder side
[07:30] <wgrant> cjwatson: What benefit does that give us?
[07:31] <wgrant> cjwatson: AFAICS it just makes it less likely that we can autoretry depwaits.
[07:31] <cjwatson> Actual accurate build-dep analysis rather than guesswork
[07:31] <cjwatson> s/build-dep/dep-wait/
[07:31] <cjwatson> The case at hand in this bug could dep-wait rather than fail.
[07:31] <wgrant> In the current environment we need to be quite conservative with depwaits.
[07:32] <wgrant> Though it's already imperfect because of virtual packages, I guess.
[07:32] <cjwatson> One thing I was thinking of was that we could try sbuild, and if it fails to install build-deps we could then run dose-builddebcheck to find out why.
[07:32] <wgrant> That is, we should avoid creating depwaits that can't be automatically retried when the dep becomes available.
[07:33] <wgrant> Returning a deeper dep makes that more likely, as one of the deps in between could change such that the dep still doesn't exist, but the package's build-deps are now satisfiable.
[07:33] <cjwatson> In which case you'll get a new answer from d-b
[07:33] <wgrant> Only if we run it again.
[07:34] <wgrant> On what condition do we trigger it to rerun?
[07:34] <wgrant> There are a few depwait builds in PPAs and such.
[07:34] <cjwatson> On a new build ...
[07:34] <wgrant> Right, that means we lose depwait autoretry.
[07:34] <wgrant> Then what value does depwait provide?
[07:34] <cjwatson> Um?  No we don't
[07:35] <wgrant> foo depends on bar
[07:35] <wgrant> bar depends on baz, which doesn't exist.
[07:35] <wgrant> baz isn't meant to exist; it's a bug in bar.
[07:35] <wgrant> New bar is uploaded, not depending on baz any more.
[07:35] <cjwatson> I'm proposing having launchpad-buildd run it if sbuild reports a missing dep
[07:35] <wgrant> foo's build is depwait on baz, and will never be autoretried.
[07:35] <cjwatson> As a replacement for the regexes
[07:36] <cjwatson> Ah, right, yes, that's a problem.
[07:36] <wgrant> Currently in that case we fail.
[07:36] <wgrant> And everyone knows a failure is sinister.
[07:36] <cjwatson> Though, the way that d-b works means that we would have control over how deep a dep-wait we choose to accept.
[07:36] <wgrant> (a depwait may be sinister in the virtual case, but non-existent virtual packages are conveniently rare)
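wgrant's foo/bar/baz argument can be modelled in a few lines. This is a toy illustration of why a deep depwait can never be autoretried, not Launchpad's actual depwait machinery; the function name and data shapes are hypothetical:

```python
def should_autoretry(recorded_missing_dep, available_packages):
    # A depwait build is retried only once the dependency recorded at
    # build time appears in the archive (simplified model).
    return recorded_missing_dep in available_packages

# foo build-depends on bar; bar wrongly depends on baz, which is never
# meant to exist. A deep depwait records "baz" for foo's build.
# A fixed bar (no longer depending on baz) is then uploaded, so foo's
# build-deps are now satisfiable -- but baz still never appears, and
# the depwait recorded against baz can never fire.
available = {"bar"}
assert not should_autoretry("baz", available)
```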
[07:39] <lifeless> wgrant: is it ever dexter?
[07:40] <wgrant> lifeless: Most of the time :)
[07:45] <cjwatson> Hm, but d-b doesn't tell us the full chain, I clearly misremembered that
[07:45] <cjwatson> So not as workable as I thought :(
[07:45] <wgrant> d-b is useful, I think, if you can run it beforehand and then run it regularly until it succeeds.
[07:46] <cjwatson> Right, as you say that's difficult with lots of archives and bootstrap etc. though.
[07:47] <cjwatson> Although the bootstrap archive could be considered to be under manual control only, I suppose.
[07:49] <cjwatson> There are two cases there: port bringup and bootstrapping cycles in normal operation.  In the latter you normally have a specific sequence of packages in mind and don't want other things to automatically pop up.
[07:49] <cjwatson> Not sure about the former.
[07:57] <wgrant> Yeah.
[08:00] <cjwatson> If we can ignore that case then it becomes "just" a performance problem of grabbing all the Packages/Sources files to alphecca for any current dep-wait build and analysing the lot.
[08:00] <cjwatson> And stopping trying to figure out the dep-wait target from sbuild output.
[08:01] <wgrant> The quotes are the reason I haven't tried it.
[08:01] <cjwatson> I should see how long it would actually take.
[08:05] <wgrant> Unfortunately we can't exactly optimise it like we do publish-distro.
[08:05] <wgrant> That is, skip archives that haven't changed.
[08:05] <cjwatson> Indeed.
[08:05] <wgrant> Because people occasionally do upload to the primary archive, and a few things depend on it.
[09:12] <maozhou> I dputed a package to my launchpad server and accepted it, but the queue is empty. Why?
[09:22] <wgrant> maozhou: Why would it be in the queue after you'd accepted it from the queue?
[09:24] <maozhou> the build queue is empty; there is no build job after accepting the package.
[09:25] <wgrant> maozhou: What did process-upload say about it?
[09:31] <maozhou> it said "Finished checking upload" and "Setting it to ACCEPTED"
[09:32] <wgrant> Did it say it created any builds?
[09:32] <wgrant> Should it have created any builds?
[09:32] <wgrant> Have builds previously worked on that series on that instance?
[09:35] <maozhou> No, it hasn't created any builds.
[09:35] <wgrant> Does it have appropriate DistroArchSerieses and chroots configured?
[09:36] <maozhou> It's the first package on this test launchpad server.
[09:38] <maozhou> the DistroArchSeries is the default, ubuntu/utopic, and I have dput the chroot to it.
[09:38] <wgrant> One cannot dput a chroot.
[09:38] <wgrant> And ubuntu/utopic does not exist by default.
[09:40] <maozhou> I can access the url "launchpad.dev/ubuntu/utopic"
[09:40] <wgrant> How did that get there?
[09:41] <maozhou> and I ran "LP_DISABLE_SSL_CERTIFICATE_VALIDATION=1 ./manage-chroot -l dev -s utopic -a i386 -f chroot-ubuntu-utopic-i386.tar.bz2 set" to dput the i386 chroot file.
[09:41] <wgrant> Ah, so not dput.
[09:42] <wgrant> How did you create utopic?
[09:42] <wgrant> Does https://launchpad.dev/api/devel/ubuntu/utopic have a link to the chroot?
[09:42] <maozhou> yes, it exists
[09:42] <wgrant> What is the full output of process-upload?
[09:43] <cjwatson> E: Build-Depends dependency for sbuild-build-depends-r-bioc-biovizbase-dummy cannot be satisfied because candidate version of package r-bioc-variantannotation can't satisfy version requirements
[09:43] <cjwatson> hm, that's less helpful than I might have wished for, apt
[09:44] <wgrant> cjwatson: Hmm.
[09:44] <cjwatson> (trying apt-get build-dep after realising that of course the way sbuild constructs the dummy Sources means that --[no-]resolve-alternatives should be fine)
[09:45] <maozhou> how do I verify that https://launchpad.dev/api/devel/ubuntu/utopic has a link to the chroot?
[09:46] <wgrant> https://launchpad.dev/api/devel/ubuntu/utopic/i386, rather
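The check can be scripted against the JSON that the devel API endpoint returns. A small sketch; the helper name is hypothetical, but the `chroot_url` field is the one discussed in this conversation:

```python
import json

def has_chroot(das_json):
    """Given the parsed JSON for a DistroArchSeries, e.g. fetched from
    https://launchpad.dev/api/devel/ubuntu/utopic/i386, report whether
    a chroot is configured (chroot_url present and non-empty)."""
    return bool(das_json.get("chroot_url"))

# Usage sketch (dev instance with a self-signed certificate):
#   import urllib.request, ssl
#   ctx = ssl._create_unverified_context()
#   url = "https://launchpad.dev/api/devel/ubuntu/utopic/i386"
#   data = json.load(urllib.request.urlopen(url, context=ctx))
#   print(has_chroot(data))
```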
[09:49] <maozhou> oh, the chroot_url is empty.
[09:49] <wgrant> The manage-chroot, then, did not succeed.
[09:50] <maozhou> but I ran "LP_DISABLE_SSL_CERTIFICATE_VALIDATION=1 ./manage-chroot -l dev -s utopic -a i386 -f chroot-ubuntu-utopic-i386.tar.bz2 set" and there was no error.
[09:51] <wgrant> Consider that your hostname resolution may have issues.
[09:51] <wgrant> Since you're using launchpad.dev everywhere, you must have some serious /etc/hosts entries.
[09:52] <maozhou> in the "/etc/hosts" file, the IP is the IP of the launchpad server
[09:52] <wgrant> Does api.launchpad.dev resolve to the right IP address?
[09:55] <maozhou> how do I verify that api.launchpad.dev resolves to the right IP address?
[09:56] <wgrant> ping or host, for example.
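Besides ping or host, the resolution check is easy to script with the stdlib; a minimal sketch (the function name is hypothetical, and this checks IPv4 only):

```python
import socket

def resolves_to(hostname, expected_ip):
    """Return True if `hostname` resolves to `expected_ip` (IPv4)."""
    try:
        return socket.gethostbyname(hostname) == expected_ip
    except socket.gaierror:
        # Hostname does not resolve at all.
        return False

# e.g. resolves_to("api.launchpad.dev", "<your server's IP>")
```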
[09:57] <maozhou> yes, when I ping launchpad.dev, the IP is right
[09:58] <wgrant> Not launchpad.dev.
[09:58] <wgrant> api.launchpad.dev.
[10:01] <maozhou> oh, when I ping apt.launchpad.dev, the IP is 127.0.0.88. Is that an incorrect IP?
[10:02] <wgrant> I can't tell you that.
[10:02] <wgrant> I know not the layout of your configuration.
[10:06] <maozhou> oh, I'm sorry, I was looking in the wrong place. On the machine that ran "LP_DISABLE_SSL_CERTIFICATE_VALIDATION=1 ./manage-chroot -l dev -s utopic -a i386 -f chroot-ubuntu-utopic-i386.tar.bz2 set", pinging api.launchpad.dev gives the launchpad server's IP
[10:07] <wgrant> Can you see the relevant request in the Launchpad webapp access log?
[10:09] <maozhou> where is the Launchpad webapp access log?
[10:10] <wgrant> In the development configuration it is logs/launchpad-access.log
[10:14] <cjwatson> wgrant: Trying to understand http://bazaar.launchpad.net/~wgrant/launchpad/product-aps-set/revision/17584 - in what way does ProductSet.getProductPrivacyFilter require a policy or artifact grant?  AFAICS it requires one of (product is public, product has no information type, policy grant)
[10:16] <wgrant> cjwatson: AccessPolicyGrantFlat is a combination of AccessPolicyGrant, AccessArtifactGrant, and AccessPolicyArtifact.
[10:16] <wgrant> That is, one can use it to query whether someone has a grant for a policy, or a grant for any artifact under that policy.
[10:16] <cjwatson> ah I see
[10:17] <wgrant> Someone with a subscription to a bug in a private project must be able to see the project, so the AAG's row in APGF is what allows them that access.
[10:17] <cjwatson> So with this we'll basically stop using APGF for the project privacy filter
[10:17] <wgrant> Currently implemented in getProductPrivacyFilter (buggily) and SharingService.checkPillarArtifactAccess.
[10:17] <wgrant> Correct.
[10:18] <wgrant> APGF was originally devised for the listing on +sharing, but it proved useful for access checks too.
[10:19] <wgrant> Wait, no, not correct.
[10:19] <wgrant> APGF will still be used.
[10:19] <wgrant> What we're removing is the join through AccessPolicy.
[10:25]  * cjwatson looks at some more branches at once.  I think I get it now.
[10:25] <wgrant> We still use APGF to get the set of valid APs.
[10:25] <wgrant> But we can now do that once, and compare it against the precached list on each Product.
[10:25] <wgrant> Like we do on Bug/Branch/etc.
[10:26] <wgrant> The only difference from the artifact case is that for Product we also allow artifact grants to count.
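The check wgrant describes can be sketched as a toy model (not Launchpad's actual schema or code; names here are illustrative): compute the set of access-policy ids for which the user has any APGF row once, then compare it against each Product's precached `access_policies` list:

```python
def can_see_product(user_grant_policy_ids, product):
    """Toy model of the access check described above.

    `product["access_policies"]` is the precached list of AccessPolicy
    ids, mirroring the precached lists on Bug/Branch/etc.
    `user_grant_policy_ids` is the set of policy ids for which the user
    has any APGF row; because APGF flattens AccessPolicyGrant and
    AccessArtifactGrant, artifact grants count too, which is the one
    difference from the artifact case."""
    if not product["private"]:
        return True
    return bool(user_grant_policy_ids & set(product["access_policies"]))

public = {"private": False, "access_policies": []}
secret = {"private": True, "access_policies": [7, 9]}

assert can_see_product(set(), public)
assert can_see_product({9}, secret)       # e.g. via a bug subscription (AAG)
assert not can_see_product({3}, secret)
```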
[10:29] <cjwatson> Would it be worth using AccessPolicySource.find instead of findByPillar so that we can query for just the types we care about?
[10:29] <cjwatson> (in _cacheAccessPolicies)
[10:29] <cjwatson> Also, GitRepository etc. calls it access_grants instead
[10:30] <wgrant> Ah, yes, find is usable now.
[10:30] <wgrant> access_grants is a list of Person.ids who have AAGs for the artifact.
[10:31] <wgrant> Each artifact has access_grants (a list of Person.ids), and access_policies (a list of AccessPolicy.ids)
[10:35] <cjwatson> I think they mostly just have a single access_policy
[10:36] <wgrant> Ah, yes, all except bugs have just one, that's true.
[10:36] <wgrant> Bugs may have multiple, as they may affect multiple pillars.
[16:33] <cjwatson> wgrant: We actually can make useful use of dose-builddebcheck, I've worked out (after swearing a lot at apt-get build-dep's failure to provide useful failure information).  Algorithm as follows.  First, try sbuild, which I think should be modified to use apt-get build-dep anyway but it doesn't really matter here.  If that fails and the output matches a build-dep failure of some kind, run dose-builddebcheck --explain --failures, ...
[16:34] <cjwatson> ... passing all candidate Packages files and a Sources containing just the source we're building.  If it reports no reason, PACKAGEFAIL.  If it reports a reason with no depchain, then that's a broken direct build-dependency and can safely be turned into DEPWAIT.  If it reports a reason with a depchain, then that's an indirect broken dependency, and for the reasons you gave above that needs to be PACKAGEFAIL.
[16:35] <cjwatson> Not as potentially elegant as using dose-builddebcheck pre-build somehow, but we know that's hard in our setup, and this deals with the problems with getting an accurate dep-wait out of apt-get build-dep.
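The classification step of cjwatson's algorithm could be sketched as follows. The report structure (a list of reason dicts, each optionally carrying a depchain) is an assumption loosely modelled on dose-builddebcheck's --explain output, and the function name is hypothetical:

```python
def classify_failure(report):
    """Classify a dose-builddebcheck --explain --failures report for one
    source, per the algorithm above:
      - no reported reason        -> PACKAGEFAIL (not a dependency problem)
      - reason without a depchain -> broken direct build-dep -> DEPWAIT
      - any reason with a depchain -> indirect breakage -> PACKAGEFAIL,
        since a deep depwait could never be autoretried reliably."""
    if not report:
        return "PACKAGEFAIL"
    if any(reason.get("depchains") for reason in report):
        return "PACKAGEFAIL"
    return "DEPWAIT"
```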
[16:37] <cjwatson> Takes 8s on my laptop to analyse something universe-sized, 1s to analyse something main-sized.
[16:38] <cjwatson> We could optimise a bit by only doing this in cases where it's hard to get sensible information out of apt-get build-dep.
[18:45] <cjwatson> wgrant: Any progress on webhooks, BTW?
[19:57] <wgrant> cjwatson: Hm, that's extraordinarily slow.
[20:09] <cjwatson> *shrug* for something at the end of an sbuild run I'm not very bothered and haven't tried to optimise it
[20:09] <cjwatson> I've probably missed something
[20:09] <wgrant> Yeah, just means it's completely impossible to run as something like retry-depwait eventually :/
[20:10] <cjwatson> Unless I missed something blatantly obvious, which is probable :)
[20:12] <cjwatson> Also it depends how that scales; if it's spending most of its time just parsing the big Packages file then that's less of a problem than if it's lots of time analysing per-package.
[20:26] <blr> morning
[20:48] <blr> cjwatson: ah, thanks for catching the mode edge case
[20:49] <blr> this bit of code did worry me in the past... glad we have a few more tests in place.
[21:41] <wgrant> cjwatson: Did you look over product-aps-use?
[22:07] <cjwatson> wgrant: I got started on it, but only just
[22:07] <wgrant> cjwatson: k
[22:07] <cjwatson> quite a bitty day, I had to take B out to go suit shopping
[22:08] <wgrant> I won't land the first bits until you've confirmed it's all good.
[22:08] <wgrant> Though I guess the schema patch is safe.
[22:11] <wgrant> cjwatson: The refactored diff processing is much more readable, thanks.
[23:19] <cjwatson> wgrant: What do you think of asking webops for https://pastebin.canonical.com/134245/ ?  It's obviously PII, but in discussion a while back we seemed to collectively think it would be OK ...
[23:23] <wgrant> cjwatson: Sounds reasonable to me.
[23:33] <blr> cjwatson: did you see my comment re: comment diff email?
[23:34] <blr> currently we're not preserving comments on dirty headers at all (I can't recall if we agreed on that, or that's an oversight)
[23:40] <blr> bzrlib parses a modified file with properties changed as a 'dirty header'
[23:44] <cjwatson> blr: Replied to your comment (indeed, you're correct)
[23:45] <cjwatson> blr: I don't think it should be possible for comments to go missing from any position, so that would be a good thing to fix.
[23:46] <cjwatson> Maybe just treat it much the same way you do comments on patch headers
[23:53] <wgrant> cjwatson: The devel buildbot green is false. It looks like a query count test broke.
[23:53] <wgrant> Though it may be spurious, as I don't see how you could have broken it.
[23:53] <cjwatson> thanks.
[23:53] <wgrant> Unless it was test ordering, ew.
[23:57] <cjwatson> It looks like it could benefit from record_two_runs
[23:58] <cjwatson> It does fail on a single run here