[03:58] <hallyn> jibel: (holiday today) saw the emails...  will look tomorrow.
[04:38] <pitti_> Good morning
[06:35] <dholbach> good morning
[07:20] <Noskcaj> roaksoax, daily reminder, testdrive
[08:27] <pitti> meh, https://jenkins.qa.ubuntu.com/view/Saucy/view/AutoPkgTest/job/saucy-adt-ubuntu-drivers-common/28/ regressed -- tseliot, could it be that the new kernel broke our nvidia/fglrx drivers?
[08:28]  * tseliot looking
[08:29] <pitti> tseliot: fglrx fails with
[08:29] <pitti> AssertionError: 100 != 0 : update-alternatives: error: error creating symbolic link `/usr/lib/i386-linux-gnu/xorg/extra-modules.dpkg-tmp': No such file or directory
[08:30] <pitti> tseliot: nvidia-304 seems to work, but 304-updates, 310 and 313 don't
[08:30] <tseliot> pitti: I think I have a fix for the alternatives error
[08:30] <pitti> tseliot: ah, great; those are always from a clean
[08:31] <pitti> ... installation
[08:32] <tseliot> pitti: I assume mesa is not being installed for the tests right?
[08:33] <cjwatson> tjaalton: Your sssd/raring patch drops a linking fix (http://paste.ubuntu.com/5846034/).  Is that intentional?
[08:33] <pitti> tseliot: no, unless the nvidia packages pull it in as a dependency
[08:33] <pitti> tseliot: otherwise it's a minimal server install
[08:33] <cjwatson> tjaalton: If so I think it needs some reasoning for why it's a good idea to drop it in an SRU ...
[08:33] <tseliot> pitti: then it's bound to fail unless I add some directories in the debian/dirs files.
[08:34] <tjaalton> cjwatson: hmm I'll check
[08:34] <pitti> tseliot: so is there a missing dependency in the test case?
[08:34] <pitti> tseliot: i. e. do these assume some Xorg-ish package to be installed now? (they didn't in the past)
[08:34] <pitti> tseliot: I can add one easily
[08:35] <tseliot> pitti: either libgl1-mesa-glx or libgl1-mesa-glx-lts-quantal should fix it
[08:37] <pitti> tseliot: ok, running locally with that added
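In DEP-8 terms the fix amounts to one extra test dependency; a sketch of the relevant debian/tests/control stanza (the test name and restrictions are assumptions, only the libgl1-mesa-glx addition comes from the discussion):

```
Tests: system
Depends: @, libgl1-mesa-glx
Restrictions: needs-root
```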
[08:37] <tseliot> pitti: good, let me know how it goes
[08:37] <tjaalton> cjwatson: good catch, must be some git fail by me.. reject and I'll reupload
[08:37] <tjaalton> no wait
[08:38]  * tjaalton runs a quick build-test
[08:41] <tjaalton> cjwatson: yeah, reject. I'll sort out the local mystery of where it got lost and reupload
[08:42] <tjaalton> "a oneliner apparmor rule change doesn't need a build check, right?"
[08:42] <tjaalton> ..
[08:42] <pitti> debdiff FTW
[08:42] <infinity> tjaalton: Probably doesn't, but every upload needs a debdiff. :)
[08:42] <infinity> pitti: Jinx.
[08:42] <tjaalton> hmm yeah
[08:43] <tjaalton> oh I see now
[08:44] <tjaalton> branched raring from a commit too early :)
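(The check infinity and pitti are pointing at is a one-liner; here as a hypothetical helper, with placeholder filenames:)

```shell
# Generate and review the source diff before uploading -- this is the
# step that would have caught the dropped linking fix above.
review_upload() {
    debdiff "$1" "$2" > review.debdiff    # e.g. sssd_old.dsc sssd_new.dsc
    ${PAGER:-less} review.debdiff
}
```

Called as `review_upload sssd_old.dsc sssd_new.dsc` once both .dsc files are in the current directory.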
[08:44] <ev> is there some trick to convincing ubuntu touch to use the newly installed kernel, beyond remounting /system rw?
[08:44] <cjwatson> tjaalton: done
[08:45] <ev> I've got a kernel from apw that I installed via dpkg, but uname -v is still reporting the build time of the previous kernel
[08:46] <apw> ev, if it was my kernel running it would have a unique version string too
[08:46] <apw> ev, did you only install the package
[08:46] <apw> ie did you do anything after
[08:46] <ev> apw: yeah, I just did dpkg -i on the three debs you gave me
[08:46] <ev> and did a sudo reboot
[08:47] <apw> hmmm no idea in container flip world if that works or not, before container flip it is definitely not enough

[08:47] <ev> apw: any idea where I prod the bootloader from on this?
[08:47] <apw> ogra_, ^^ does flash-kernel grok container flip yet or are we abootimg'ing still
[08:48] <ev> ah, I remember that thing from the last time I toyed with all of this :)
[08:48] <tjaalton> cjwatson: thanks, will upload again in a bit
[08:48]  * ogra_ wonders why he writes instructions in announcements :P
[08:48] <ogra_> apw, see the flipped container mail
[08:48] <ogra_> :)
[08:49] <ogra_> (and then use flash-touch-kernel)
[08:49] <ogra_> (with path to zImage as arg)
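Assembling ogra_'s steps into one place (wrapped in a function since it only makes sense on the device; the package and zImage names are placeholders):

```shell
# Install and actually boot a test kernel on a flipped-container
# Ubuntu Touch image. dpkg -i alone is not enough: the new zImage
# must also be written to the boot image with flash-touch-kernel.
flash_test_kernel() {
    sudo mount -o remount,rw /system        # /lib/modules is a symlink into /system
    sudo dpkg -i linux-image-*.deb          # the debs from apw
    sudo flash-touch-kernel /boot/zImage-3.4.0-3-grouper
    sudo reboot
    # after reboot, `uname -v` should show the new build timestamp
}
```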
[08:52] <ev> I don't have an email that mentions flash-touch-kernel :), but fair enough
[08:52] <pitti> tseliot: that seems to fix fglrx, but not the nvidia ones
[08:52] <pitti> tseliot: expected?
[08:53] <tseliot> pitti: err... let me check nvidia
[08:53] <pitti> tseliot: I pushed that as f181e58e9cb8a204
[08:54] <pitti> tseliot: oh, it's because nvidia-310 is now a transitional package
[08:54] <pitti> and so is nvidia-313-updates
[08:54] <tseliot> pitti: correct
[08:54] <pitti> tseliot: so I guess we want to update those to check 319 now?
[08:55] <tseliot> pitti: 319 and 319-updates, yes
[08:56] <pitti> running
[08:57] <ev> wooo http://paste.ubuntu.com/5846078/
[08:57] <ev> thanks apw and ogra_
[08:58] <apw> ogra_, so you have somewhere to refer us to when we are being wallies ?
[08:58] <tseliot> pitti: BTW how do you run those tests in ubuntu-drivers-common?
[08:58] <ogra_> heh
[08:58] <tseliot> pitti: the ones in debian/tests/system
[08:59] <pitti> tseliot: you can just run them with sudo debian/tests/system, but I usually run them in a full VM with run-adt-test -sUS file://`pwd` ubuntu-drivers-common
[08:59] <pitti> (to test my local changes)
[09:00] <tseliot> ah, nice
[09:02] <tjaalton> cjwatson: ok fixed version up
[09:13] <pitti> tseliot: success! thanks for the hints, uploading now
[09:13] <tseliot> pitti: excellent, thanks
[09:13]  * pitti wants green dots again
[09:14] <apw> ev, it would be good to know sooner rather than later if that test kernel is any good, as i am considering an upload for grouper
[09:14] <ev> apw: ^ see above. Works a treat
[09:14] <ev> thanks!
[09:15] <apw> ev, awesome, i'll rack it and stack it then
[09:15] <cjwatson> tjaalton: thanks, accepted
[09:18] <ev> apw: cheers
[09:25] <ogra_> apw, ev, i'll add a kernel/postinst.d script so installing packages will DTRT
[09:29] <apw> ogra_, sounds good indeed
[09:35] <ev> ogra_, apw: fwiw the kernel packaging or something needs to remount /system rw since /lib/modules is a symlink to it
[09:49] <ogra_> ev, well, the whole disk  will be readonly soon
[09:50] <ev> ogra_: interesting. Why?
[09:50] <ogra_> ev, we will drop apt support and switch to image based upgrades
[09:50] <ogra_> (there will be a developer mode but it isnt clear yet what that will enable)
[09:50] <ev> ah, but surely we'll still be able to write to /var/log and friends?
[09:51] <ogra_> ev, security ... everything but the /data partition will be ro
[09:51] <ogra_> heh. indeed, the system will go on to function :)
[09:51] <ev> yay
[09:51] <ev> as long as we arrange for /var/crash to be writeable, I'm happy
[09:51] <ogra_> /run and /var/foo will indeed be tmpfses
[09:52] <ev> or whatever we set APPORT_REPORT_DIR to
[09:52] <ogra_> yeah, you might want the crashes persistent during the dev cycle, i guess using /data might be a good idea ... (we can then switch to a tmpfs after release)
[09:53] <ogra_> s/after/at/
[09:55] <ev> I'm not convinced they need to persist for too long, but it might be worth making them survive a reboot. I'd need to talk to pitti about it, but an upstart inotify watch that removed both the .crash and .upload* when .uploaded is present
[09:55] <ev> but if it's going to make things difficult, we won't lose a ton by living on a tmpfs
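A one-shot sketch of the cleanup ev describes: once a report carries an .uploaded marker, its .crash file and .upload* leftovers can go. The naming follows apport's /var/crash convention; the real thing would hang off an upstart inotify watch rather than run as a sweep.

```shell
clean_uploaded_reports() {
    dir="$1"
    for marker in "$dir"/*.uploaded; do
        [ -e "$marker" ] || continue          # glob matched nothing
        base="${marker%.uploaded}"
        rm -f "$base".upload* "$base.crash"   # .upload* also catches the marker itself
    done
}

# throwaway demo
demo=$(mktemp -d)
touch "$demo/_usr_bin_foo.1000.crash" "$demo/_usr_bin_foo.1000.upload" \
      "$demo/_usr_bin_foo.1000.uploaded"
touch "$demo/_usr_bin_bar.1000.crash"         # not uploaded yet: must survive
clean_uploaded_reports "$demo"
ls "$demo"
```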
[09:55] <infinity> Having them survive a reboot is probably user-hostile on a shipping device anyway.
[09:56] <infinity> Assuming there's going to be some process that, at boot, tries to report it all and grinds my phone to a halt like apport does to my laptop.
[09:56] <ev> yeah, was just trying to consider the case where the device crashes in a way that causes the user to reboot it
[09:56] <pitti> ogra_, ev: yeah, we have some things which need to survive, such as suspend failure reports; the others are probably not all that interesting
[09:56] <pitti> (or in general kernel oopses)
[09:57] <pitti> well, storing those in a persistent location doesn't imply that we necessarily have to send them all right after boot
[09:57] <infinity> (Tell me that whatever is reporting crashes on the phone isn't going to be the system-killing mess that the current state of affairs is?)
[09:58] <infinity> Cause on an i7 laptop, booting with a dozen pending crash reports basically means an unusable system for half an hour.
[09:58] <infinity> Pretty sure that's a non-starter on an A9 phone.
[09:58] <ev> infinity: so to some extent that's mitigated by not having a UI on top of this
[09:58] <cjwatson> ogra_: /var/foo> do you know just how much of /var?  It would be helpful to know for click
[09:58] <ev> also, we'll skip the (existing) hook paths
[09:59] <ev> since there's no interactive UI (though not ruling out server-side hooks)
[09:59] <cjwatson> Actually, tmpfses don't help me
[09:59] <ogra_> ev, you can write to /dev/kmsg .... android uses a ram console by default, that means crashes going to that device (or actual kernel oopses) will persist in /proc/last_kmsg over reboots
[09:59] <ogra_> cjwatson, well, i guess we define how much, stgraber designs this, so if you have specific needs make sure he knows
[09:59] <ev> infinity: I'd happily rewrite it in C, but it's been deemed not a good investment of my time. We might be able to convince it to run under pypy though.
[10:00] <cjwatson> ev: 'cos pypy works so well on ARM
[10:00] <ev> I had not considered that :)
[10:00] <cjwatson> (Well, it might work, but it hasn't built in ages)
[10:00] <infinity> It builds, with sufficient resources.  Calxeda will save us all.
[10:00] <cjwatson> When it happens ...
[10:00] <infinity> Still have no idea how potentially awesome it is or isn't once it's built.
[10:01] <cjwatson> I think any modules you need would need pypy-* packages too?
[10:01] <cjwatson> Not quite sure how that stuff works
[10:02] <infinity> ogra_: Note that last_kmsg only survives a warm reboot, not a hard-crash-and-battery-pull sort of situation.
[10:02] <ogra_> not a battery pull but up to now it survived all crashes i needed data for :)
[10:02] <ogra_> cjwatson, if a tmpfs isnt enough you need to use /data
[10:03] <ev> infinity, pitti, others: if you have further suggestions as to how we can make this a less intensive experience on the phone, I'm all ears. One option would be a more aggressive compression of the report data, either by compressing more fields or switching to something that gives a better ratio (for the increased decompression cpu burden on daisy.u.c.)
[10:03] <infinity> cjwatson: I assume click has a /var/lib/dpkg equivalent that it needs to write to?
[10:03] <infinity> cjwatson: Probably best to just bindmount that to data/clickdb or something.
[10:03] <ogra_> ev, more compression means more battery draining and more CPU hogging
[10:04] <cjwatson> infinity: Yeah, just wanted to know what my parameters are, and it would be nice to avoid lots of package-specific bind-mounts
[10:04] <ogra_> if you have the diskspace i would go for extensive compression
[10:04] <ogra_> *wouldnt
[10:04] <ev> ogra_: okay, so then just compressing the core file as we do now is probably the way to go
[10:04] <pitti> ev: more aggressive compression would probably be more CPU intensive? so that's a bandwidth/CPU tradeoff
[10:04] <ev> yeah
[10:04] <infinity> ev: Keeping it cheap on CPU is probably ideal.  Obviously, enormous reports over 3G data is also kinda crap, but hopefully they'll be smallish and infrequent...
[10:05] <ev> we won't send them over 3g, but yeah
[10:05] <ogra_> yeah, gzip with std options doesnt do much harm
[10:05] <ev> wifi only. I don't want to get forwarded cell phone bills
[10:05] <pitti> ev: but I guess if we really want to get a magnitude more performance out of this, we need to move from full core dumps and Python to something like minidumps and C/C++/vala..
[10:05] <cjwatson> ev: Hopefully it's a bit more fine-grained that that; some of us have reasonably adequate 3G limits
[10:05] <infinity> We won't?  Android sends reports over 3G (it asks me if I want to, first, mind you, and they're also not huge cores)
[10:05] <ogra_> xz with highest compression rate definitely does ... i guess you even feel it on the UI
[10:05] <cjwatson> *than that
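The tradeoff being weighed above is easy to probe; a synthetic example (core dumps compress differently from plain text, so this only illustrates the shape of the CPU/ratio curve):

```shell
# Default gzip vs maximum xz on ~1.4 MB of compressible text; prefix the
# two compressors with `time` to feel the CPU cost difference.
seq 1 200000 > /tmp/sample
gzip -kf /tmp/sample          # cheap on CPU, moderate ratio
xz -9 -kf /tmp/sample         # best ratio, heaviest CPU (and battery) cost
ls -l /tmp/sample /tmp/sample.gz /tmp/sample.xz
```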
[10:06] <ev> cjwatson: how do you make the distinction?
[10:06] <cjwatson> ev: ask
[10:06] <cjwatson> ev: system preference kind of thing
[10:06] <ev> https://wiki.ubuntu.com/ErrorTracker#Privacy_settings for what it's worth
[10:06] <pitti> ev: but with client-side dupe detection we hopefully won't actually have to send so many cores?
[10:06] <ev> mpt: ^
[10:06]  * ogra_ wouldnt allow reports via 3G ... period
[10:06] <ev> pitti: correct
[10:06] <infinity> ev: Android is littered with "use 3G for scary data" app options.  We could probably go one better and make it a global "just do it" option.
[10:06] <cjwatson> ev: I'd have expected this to be a system setting, not specific to errors
[10:06] <pitti> ogra_: certainly not a bad approach to begin with -- send them when enabling wifi
[10:07] <ev> we only ask for a core if we haven't already retraced it or don't already have it in the pipeline to be retraced
[10:07] <ogra_> pitti, exactly ...
[10:07] <cjwatson> Android's approach is pretty much right here IME
[10:07] <cjwatson> infinity: It has a general system-level setting too, doesn't it?
[10:07] <infinity> cjwatson: I'd argue Android's approach is a bit fragmented on the app level (I've had to tell several apps that it's okay to upload on 3G, had to tell VoIP apps that it's okay to make calls on 3G, etc... I'd love a system setting)
[10:07] <ogra_> cjwatson, androids approach ends up in a million of options ... i dont think thats good
[10:08] <infinity> cjwatson: If there's a system setting, that's new, which would explain all the apps also asking me. :P
[10:08] <cjwatson> Maybe I'm imagining it
[10:08] <infinity> (So, we should get it right the first time)
[10:08] <ev> pitti: minidumps would be pretty cool
[10:08] <pitti> ev: still a question of how useful they are going to be for actually fixing the crashes..
[10:09] <cjwatson> There's things like Data usage -> ... -> Restrict background data
[10:09] <pitti> knowing in which function a crash happens doesn't tell you that much
[10:09] <ev> pitti: are we not able to just pair them with a symbol server and get a full stacktrace out?
[10:09] <infinity> cjwatson: Ahh, yeah, that's been there since 1.x
[10:09] <cjwatson> Anyway, I thought I remembered seeing a spec for our 3G usage that included something plausibly system-level for this
[10:09] <pitti> ev: no, minidumps don't have that information any more AFAIK; one really needs a core
[10:09] <ev> ah, rubbish
[10:09] <cjwatson> And of course Android has the mobile data limit thing
[10:10] <xnox> infinity: i think mpt drew a setting for data, where all apps are listed and one can flick them on/off for 3G usage.
[10:11] <xnox> mpt: shouldn't we have an upload service in addition to download service? u1, crash reports, files / photos.........
[10:13] <pitti> ev: but again, hopefully with SAS we won't need to upload all that many cores..
[10:14] <pitti> ev: perhaps we can dial down the aggressiveness of this for phones a bit, such as not requesting more cores until the first report (with matching SAS) finishes retracing and fails, and give up after three failures
[10:14] <pitti> ev: not sure how aggressively it collects cores right now, perhaps it already works that way?
[10:15] <pitti> that still leaves the local processing, of course
[10:15] <mpt> xnox, on the contrary, I just now mailed ubuntu-devel-discuss@ saying I doubted it would be useful to have OS-level control  over which apps access the Internet.
[10:16] <xnox>  /o\ mpt I only have a 500MB data plan, I want everything on WiFi only. But I want an app to automatically start using 3G if it's in the foreground - meaning I want it to have internet access over the scarce 3G
[10:17] <mpt> https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2013-July/014653.html
[10:17]  * xnox should consider forking out 5GBP a month for unlimited 3G internet.....
[10:17] <mpt> I get 1 GB/month :-)
[10:17] <xnox> mpt: not ubuntu-phone mailing list? https://launchpad.net/~ubuntu-phone
[10:18] <mpt> xnox, I didn't decide where the discussion started
[10:19] <xnox> mpt: For me 0.5GB is fine, sitting at home/work on WiFi. But sometimes I am somewhere bored and decide to like pictures of all lolcats in my facebook newsfeed...... download a few games..... and it goes downhill from there.
[10:19] <mpt> yep
[10:22] <tjaalton> how evil would it be to ship an identical file in two binaries and add Replaces: "the other" to them?
[10:23] <tjaalton> this is to match sssd packaging on Fedora, where they can ship the same file in two packages and they won't conflict
[10:23] <tjaalton> avoids creating a package which has just one file
[10:24] <xnox> tjaalton: what's the file and why it's needed?
[10:25] <tjaalton> it's a helper binary needed by ipa and ad backend
[10:25] <tjaalton> s
[10:25] <xnox> tjaalton: instead of conflicts it could be a dpkg alternatives file, then both packages can be installed. And if the file is the same, it doesn't matter which one is "active", and it will be guaranteed to be available if either of the two packages is installed.
[10:26] <xnox> tjaalton: would be nicer to have it in one place. Or can both packages be used without the other one installed?
[10:26] <tjaalton> yes
[10:27] <xnox> tjaalton: and can both be installed simultaneously and used as well? if yes, please don't make them conflict....
[10:27] <tjaalton> the backends got split into subpackages so you can only have what you need (ldap, krb5, ad, ipa)
[10:27] <tjaalton> I didn't mean to make them conflict, but add Replaces
[10:27] <cjwatson> identical + Replaces> that's really very evil and is a timebomb for later developers.  Use a -common package or something.
[10:28] <tjaalton> ok, figured
[10:28] <cjwatson> Using alternatives is also pretty confusing.  I wouldn't recommend that.
[10:40] <infinity> tjaalton: Mutual Replaces is more than just confusing, it's broken.  If you install A, then B, then remove B, the overlapping file disappears.
[10:40] <tjaalton> infinity: heh, ok.. didn't think of that
[10:42] <infinity> tjaalton: A -common, or alternatives, or a dpkg-divert all work fine, for varying levels of user and developer confusion around each option.
[10:42] <tjaalton> sssd-ad-common is least confusing
[10:43] <tjaalton> and really not much work either to "maintain"
[10:44] <tjaalton> or pac-common
[10:44] <tjaalton> anyway..
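The -common split tjaalton settled on would look roughly like this in debian/control (names and descriptions are only a sketch based on the discussion):

```
Package: sssd-ad-common
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: shared PAC responder helper for sssd
 Ships the single helper binary that both the AD and IPA
 backends need, so neither package has to carry a copy.

Package: sssd-ad
Architecture: any
Depends: sssd-ad-common (= ${binary:Version}), ${shlibs:Depends}, ${misc:Depends}
Description: Active Directory backend for sssd
```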
[10:49] <ev> pitti: yeah, it should already work that way (sorry for the delay - we're moving Cassandra to prodstack today).
[10:49] <pitti> ev: ooh!
[10:51] <ev> :) exciting times
[10:51] <pitti> doko_: ah sorry, I didn't realize that postgresql-9.1 -2 (FTBFS fix) didn't autosync; I adjusted your tcl config change to work in Debian as well, and will upload now
[10:52] <ev> Twelve 1 TB nodes to start with
[10:52] <pitti> 11 TB! holy c***
[10:52] <ev> :D we've got a lot of data
[10:52] <pitti> ev: do we really have that amount of data for errors, or is that "for futureproof"?
[10:53] <pitti> err, 12 TB, but whatever
[10:56] <ev> pitti: we have 6 TB of actual data
[10:56] <ev> with a replication factor of two
[10:57] <ev> (we want to keep the node sizes down to about 1TB, since earlier versions of Cassandra deal better with 1TB or less)
[10:58] <ev> pitti: it's part of the reason why I'm really keen on getting Hadoop running soon after we finish the move to prodstack (being on prodstack makes it much easier, since we can spin up a secondary cluster for just hadoop analytics)
[10:59] <pitti> so we have more error data than archive size :)
[10:59] <ev> errors.ubuntu.com will serve the common use case, but for real investigative work, having the whole dataset at your fingertips will be invaluable
[10:59] <ev> lol, an interesting way to look at it, yeah
[11:00] <ev> the nodes themselves will have 2TB of disk space each
[11:00] <ev> we think we're the first people to use Ceph for this
[11:07] <tunnelshade> Need some help in packaging
[11:08] <cjwatson> Please ask your question directly, rather than asking to ask
[11:11] <tunnelshade> I have to build a Debian package which has to copy files and run postinst scripts. For this purpose I used debian/package.install. The install from the .deb was great. But when I tried to upgrade with the next version of the .deb, all the files previously copied got erased.
[11:11] <cjwatson> Might be easiest if we could see the source package.
[11:12] <cjwatson> At least the debian/* part.
[11:15] <tunnelshade> https://anonfiles.com/file/6696ee30a49cde33f51c4d4bf14bcf81 <= If this can do
[11:16] <cjwatson> tunnelshade: Can you give an example path which is erased?
[11:17] <tunnelshade> Initially during install the whole source gets copied to /opt
[11:18] <tunnelshade> so when I tried an update, there is no source folder in /opt
[11:18] <cjwatson> Giving exact paths would help.
[11:18] <cjwatson> (Bear in mind I'm unfamiliar with your package and am not planning to install it ...)
[11:18] <tunnelshade> /opt/package/
[11:18] <cjwatson> No matches for "/opt/package" in your debian/ directory.
[11:19] <tunnelshade> I didnt build it, I gave you the folder before building
[11:19] <tunnelshade> one second
[11:19] <cjwatson> Generally speaking you should try to avoid your postinst fighting with the list of files shipped in your .deb
[11:19] <cjwatson> So any files listed by 'dpkg -c foo.deb' you should regard as read-only
[11:20] <cjwatson> Most problems of this kind are because the postinst is illicitly fiddling about with files shipped in the .deb
[11:20] <tunnelshade> But I am just using them for reading
[11:21] <cjwatson> Then I'd need to see 'dpkg -c' output for the old and new versions, I think
[11:21] <tunnelshade> Almost same, with some new files
[11:22] <cjwatson> Try cleaning everything out, installing the old version, then "dpkg --unpack owtf_newversion.deb", check whether the files are still there, "dpkg --configure owtf", check again
[11:22] <cjwatson> That will allow you to tell whether it's dpkg itself or your postinst that's removing the files
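cjwatson's two-phase test, spelled out as a hypothetical helper (filenames are placeholders; the point is that --unpack and --configure run different maintainer scripts):

```shell
# Split an upgrade into its two dpkg phases to see which one is
# deleting /opt/owtf. --unpack runs the old package's prerm, the new
# preinst, and the old postrm (with "upgrade"); --configure runs the
# new postinst.
bisect_upgrade() {
    sudo dpkg -i owtf_old.deb        # clean baseline install
    ls /opt/owtf                     # files present
    sudo dpkg --unpack owtf_new.deb
    ls /opt/owtf                     # gone already? blame prerm/preinst/postrm
    sudo dpkg --configure owtf
    ls /opt/owtf                     # gone only now? blame the postinst
}
```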
[11:24] <tunnelshade> The files are gone with --unpack itself
[11:24] <cjwatson> Were they there before the --unpack?
[11:25] <tunnelshade> yes
[11:25] <cjwatson> *Which path*?
[11:25] <cjwatson> Exact copy and paste.
[11:25] <tunnelshade> /opt/owtf/
[11:25] <cjwatson> (Incidentally, sudo in a preinst makes no sense; the maintainer scripts always run as root.)
[11:26] <tunnelshade> that is just a draft, first I get things working, I can cleanup
[11:26] <cjwatson> You mean that /opt/owtf itself is entirely absent?
[11:26] <tunnelshade> yes
[11:26] <cjwatson> Then it must be your postinst's fault
[11:26] <tunnelshade> (Just to remind I am using install file inside debian folder for copying files)
[11:26] <cjwatson> Assuming that those files are in owtf_oldversion.deb at all
[11:27] <cjwatson> How you got the files into the .deb is irrelevant
[11:27] <tunnelshade> So if I disable postinst you say that things should go fine
[11:27] <cjwatson> They're being unpacked and then apparently deleted; the only thing that runs that has the opportunity to delete them is your postinst
[11:27] <cjwatson> Well, if I were you I would fix the postinst, not disable it
[11:28] <tunnelshade> Fixing means??? Not touching the files
[11:28] <cjwatson> You might like to execute bits of it step by step to see where exactly it's going wrong
[11:28] <cjwatson> Not removing the files!
[11:28] <tunnelshade> ah
[11:28] <cjwatson> I don't know exactly where it's doing that
[11:28] <cjwatson> Because it's calling out to external scripts I don't have
[11:28] <cjwatson> I can see that it's removing /opt/owtf/dictionaries/cms-explorer, which is almost certainly a bad idea
[11:29] <cjwatson> But not where it's removing all of /opt/owtf
[11:29] <cjwatson> However, nothing else has the opportunity to do that ...
[11:29] <tunnelshade> I am also blown by what is happening
[11:30] <tunnelshade> For information, what is the best place if I wish to run some scripts after installation and using the installed packages
[11:30] <cjwatson> postinst
[11:32] <cjwatson> There's nothing fundamentally wrong with doing post-installation steps in the postinst (although in general the less maintainer script code you can get away with writing the better), but from what you've told me your current code would appear to be somehow responsible for the deletions you're complaining about, so start there.  It doesn't run in a particularly magic environment or anything; you can just try stepping ...
[11:32] <cjwatson> ... through your commands from there as root and seeing where things get blown away.
[11:33] <tunnelshade> I completely removed the postinst, but still the same
[11:33] <tunnelshade> By same I mean, the whole /opt/owtf got removed
[11:34] <tunnelshade> Ah, please take a look, I think the error is in my postrm
[11:36] <cjwatson> Ah!  Yes, indeed
[11:37] <tunnelshade> removing files for upgrade as well
[11:37] <cjwatson> That doesn't make a whole lot of sense.  Try to make it so that you only remove things created by the preinst/postinst.
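A postrm along the lines cjwatson suggests only cleans up on remove/purge and leaves upgrades alone (what the postinst actually creates under /opt/owtf is an assumption here):

```shell
#!/bin/sh
set -e
case "$1" in
  remove|purge)
    # Only delete state the postinst generated itself; files shipped
    # in the .deb are dpkg's job and must never be touched here.
    rm -rf /opt/owtf/generated
    ;;
  upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
    # Nothing to do: dpkg replaces shipped files during an upgrade.
    ;;
esac
```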
[11:38] <tunnelshade> Ok, so I have to remove the postrm from the old-version package as well, right
[11:38] <tunnelshade> In order to test
[11:38] <cjwatson> postrm bugs can be awkward to recover from, yes.
[11:43] <tunnelshade> Thanks a lot for your time cjwatson
[11:43] <cjwatson> no problem
[11:43] <tunnelshade> My problem is solved
[11:44] <tunnelshade> Can you give me any suggestions other than above, to change in my package
[11:46] <cjwatson> Generally you should try to depend on packaged Python modules rather than doing a load of pip install calls.
[11:46] <cjwatson> I'm afraid I don't have time for a full review though
[11:47] <cjwatson> (At least three urgent-ish things to do today of which I have started on about 0.5)
[11:47] <tunnelshade> The python packages are outdated in the repos
[11:47] <tunnelshade> so I have to depend on pip
[11:47] <cjwatson> Or package new versions
[11:48] <tunnelshade> I am a cyber security guy, just wanted to package a tool I develop :(
[11:48] <tunnelshade> So I am not a debian guru
[14:19] <mdeslaur> cjwatson: is there an example manifest file somewhere other than the file format document?
[14:36] <A1Recon> Has 802.11ac gone beyond draft?
[14:43] <cjwatson> mdeslaur: there's one inside http://people.canonical.com/~cjwatson/tmp/com.ubuntu.apps.camera_2.9.1daily13.06.13_all.click - possibly not hugely informative though
[14:43] <mdeslaur> oh, sweet, and actual click package! :)
[14:43] <mdeslaur> s/and/an/
[14:43] <mdeslaur> cjwatson: thanks :)
[14:44] <cjwatson> people kept asking me for one so I cobbled something together.  I don't promise not to break it
[14:46] <mdeslaur> cjwatson: so, we don't seem to have a "display name" field in there...ie: "Camera"
[14:46] <mdeslaur> cjwatson: is that something we want?
[14:46] <dholbach> can somebody please reject https://code.launchpad.net/~malizor/ubuntu/saucy/ubuntu-wallpapers/fix-for-1177260/+merge/171911?
[14:46] <mdeslaur> cjwatson: or is the "title" supposed to be that, but is a bad example in your click package?
[14:47] <dholbach> hey mvo, does lp:~dylanmccall/update-manager/dialogs-refactor look all right to you?
[14:49] <cjwatson> mdeslaur:     "title": "Camera application",
[14:50] <mdeslaur> hrm
[14:50] <cjwatson> mdeslaur: isn't that a display name?
[14:50] <cjwatson> mdeslaur: title is meant to correspond to the first line of Description:, basically
[14:50] <mdeslaur> cjwatson: ok, sounded like a short description to me
[14:50] <cjwatson> mdeslaur: I think it would be OK to consider it as display-name
[14:50] <cjwatson> It's basically convention
[14:50] <mdeslaur> ok
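For reference, the manifest inside that click package has roughly this shape (name, version, and title taken from the discussion and the package URL; other fields are omitted rather than guessed):

```
{
    "name": "com.ubuntu.apps.camera",
    "version": "2.9.1daily13.06.13",
    "title": "Camera application"
}
```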
[15:02] <Guest33546> stgraber, hey, I haven't heard back from highvoltage yet, when he confirms which way we can do the call, I will kick it off
[15:02] <Guest33546> weird, this is jono
[15:02]  * Guest33546 restarts XChat
[15:03] <stgraber> jono_: I pinged him on IRC a few minutes ago but haven't heard back yet
[15:03] <jono_> stgraber, same here
[15:03] <jono_> stgraber, lets give him a few mins, and then if he doesn't respond maybe we can go ahead and I can talk to him later in a different call
[15:05] <stgraber> jono_: ok. I had a chat with him a couple days ago about our position wrt Mir, so at least our views should be pretty much aligned :)
[15:05] <highvoltage> jono_, stgraber: I' back
[15:05] <highvoltage> *I'm back, even
[15:05] <stgraber> hey highvoltage!
[15:05] <jono_> oh hey highvoltage :-)
[15:05] <stgraber> highvoltage: can you do g+?
[15:05] <highvoltage> (ran a few minutes late with giving someone a lift)
[15:06] <jono_> highvoltage, np
[15:06] <highvoltage> yep, I'm logged in
[15:06] <jono_> highvoltage, stgraber cool, I will set it up now and send a link
[15:07] <jono_> highvoltage, stgraber https://plus.google.com/hangouts/_/e24720733bd142ef469692ef34939e0af9292a57?authuser=0&hl=en
[15:07]  * highvoltage has not used hangouts in a while so might just have to configure a bit
[15:08] <jono_> highvoltage, no worries :-)
[15:25] <dholbach> @pilot out
[15:25] <dholbach> (more piloting on monday :))
[15:26] <dholbach> tsimpson, ^ do you know what happened with the bot?
[15:29] <tsimpson> dholbach: looks like it died when freenode exploded, I'm bringing it back
[15:29] <dholbach> thanks!
[15:31] <tsimpson> there it is