[09:17] <guruprasad> rbasak, while the assignment works, using the value in any operation will require getting its representation, no?
[09:18] <guruprasad> Eickmeyer[m], I don't have the permissions to view the repository either. So I am unable to help. cjwatson, are you able to view this repository? If yes, can you check why Eickmeyer[m] is unable to access it?
[09:39] <rbasak> guruprasad: I understand why I get an exception later. But what I expected was to get the exception raised sooner - at the assignment.
[09:39] <rbasak> Or, I'd like a documented way to test if accessing the object will later fail. For now I'm using a repr() in my try: block.
[09:44] <cjwatson> rbasak: I think the lack of an exception at the getattr is a deliberate consequence of avoiding unnecessary network round-trips: a request isn't made to the actual object until absolutely necessary, which isn't until you try to access some attribute of that object.
[09:44] <cjwatson> rbasak: Accessing .self_link should be a reliable way to test if you can see the object.
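The lazy-loading behaviour cjwatson describes can be sketched with a minimal proxy class. This is an illustration of the pattern only, not launchpadlib's actual implementation; the `LazyEntry` class, `Unauthorized` exception, and `fetch_private` function are all stand-ins:

```python
class Unauthorized(Exception):
    """Stand-in for the error raised when the web service returns 401/403."""


class LazyEntry:
    """Sketch of a lazily-loaded web service object.

    Assignment merely stores the URI; no request is made until an
    attribute is first read, which is when any Unauthorized surfaces.
    """

    def __init__(self, uri, fetch):
        self._uri = uri
        self._fetch = fetch
        self._data = None

    def __getattr__(self, name):
        # __getattr__ only fires for attributes not found normally,
        # so the _uri/_fetch/_data lookups above don't recurse here.
        if self._data is None:
            self._data = self._fetch(self._uri)  # the deferred round-trip
        return self._data[name]


def fetch_private(uri):
    raise Unauthorized(uri)  # simulate a forbidden fetch


# Assignment succeeds: no request has happened yet.
entry = LazyEntry("https://api.example/~team/+recipe/1", fetch_private)

# Probing an attribute (e.g. self_link) is what actually triggers it.
try:
    entry.self_link
    visible = True
except Unauthorized:
    visible = False
print(visible)  # False
```

This is why wrapping `repr()` (or a `.self_link` read) in a `try:` block works as a visibility probe: either forces the deferred request.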
[09:46] <cjwatson> Eickmeyer[m],guruprasad: It looks like this repository somehow has a private source package recipe associated with it.  (guruprasad, you should see a traceback that implies this.)
[09:47] <cjwatson> This should be redacted rather than causing an Unauthorized exception, but private source package recipes aren't really supposed to be a thing (maybe it's owned by a private team or something?) so views that render links to them may not be properly prepared for their existence.
[09:51] <RikMills> cjwatson: thanks. not sure how the upstream git import used by the recipe got marked as private, but making it public fixes the recipe being private, I think
[09:52] <rbasak> OK, thanks. I'm surprised that accessing self_link triggers it. Do objects not even know their own URI?
[09:52] <RikMills> and in turn fixes the original issue 
[09:53] <cjwatson> rbasak: .self_link isn't specifically optimized
[09:54] <cjwatson> RikMills: Weird, doesn't quite seem to match the exception I got, but OK, as long as it works :)
[16:07] <rbasak> cjwatson: FYI, I'm expecting to start ramping up git-ubuntu's universe coverage next week. I assume that's still fine; I'll do it gradually over a few weeks.
[16:11] <cjwatson> rbasak: Should be OK, we may just want to keep an eye on https://grafana.admin.canonical.com/d/oIhMaXhMk/launchpad-dash?orgId=1&refresh=5m&viewPanel=38 (if you can see that?)
[16:12] <cjwatson> rbasak: I don't believe there's a lot of free fast Ceph space at the moment, so if that starts getting too close to the ceiling we may have to ask you to abort
[16:13] <cjwatson> rbasak: (currently: 3.19TiB used, 1.61TiB free)
[16:16] <rbasak> cjwatson: thanks. Yes I can see that. I'll keep an eye on it.
[16:16] <cjwatson> Reasonable company-internal permissions, amazing
[16:18] <rbasak> If it helps, in an emergency you can block git-ubuntu-bot. Also, the repositories owned by git-ubuntu-import that aren't the default for their target aren't "live", so deleting them shouldn't impact users of the git-ubuntu CLI (only those hitting them manually)
[16:19] <rbasak> I expect that I'll bulk import a batch first, and only when the batch is done adjust their default targets, since I have to do that using my own permissions and not the bot's.
[16:19] <cjwatson> rbasak: it'd probably be quicker to yell at you ;-)
[16:19] <rbasak> Assuming I'm not asleep at the time :)
[16:19] <rbasak> (but I intend to do this when I'm around!)
[16:19] <cjwatson> I think we have reasonable alerts for git disk space, or at least we used to